This is a work of fiction: all places and people are fictional, all ideas are hypothetical and do not necessarily represent my (or anyone else’s) actual opinion of how the world does or should work.
Another dozen captioned images of mangled corpses and demolished buildings scrolled by. Lester gave each a cursory glance before mechanically pressing ‘a’ to accept it. He had just greenlit a major raid on a neighborhood, and they were going to need all the water-muddying they could get to avoid a complete collapse of support among the populace.
He briefly flashed back to his days in the Introduction to Population Influence class; his instructors had droned endlessly about how to properly conduct photo-ops and publicize the positive sides of occupation - peace and order - to most effectively undercut violent insurgent propaganda. As it turned out, they had all been wrong; when it came to pictures of smiling local children in shiny new schools against 5.7-second videos of _other_ local children caught in exploding homes, there was no ratio high enough to matter. The positive had to be happening to you; the negative needed only to be a few seconds of dubiously sourced video from somewhere you could imagine being in order to erase the positive. Thus came what the instructors called “Negative Adversary Portrayal”: focusing all information operations on showing that the insurgents did the exact same types of things the peacekeeping forces did.
This had two key advantages. First, once everyone hit a certain bottom moral threshold, the populace stopped trying to make choices based on morality or high-minded principles of any kind. That left materialistic concerns as the basis for choosing a side, so the peacekeepers just needed to employ directed aid and co-opt the right local strongmen to become the favored materialistic choice.
Second, the impact of negative coverage followed a logarithmic curve: under the old doctrine of positive portrayals, a single botched airstrike would completely erase the positive media. Conversely, when all the news was bad, that botched airstrike might barely be noticed by the population, much less stand out against all the footage of other collateral damage inflicted by both sides. This second advantage was what enabled Lester to greenlight the raid that was about to commence. The neighborhood in question was still densely populated with unaffiliated civilians, and home to two of the few factories still functioning and providing some sort of normal employment in the city. It also happened to hold the shiny new school the peacekeepers had finished just last month. The raid was likely to destroy at least two of the three, not to mention inflict the attendant civilian casualties. That would have made it completely infeasible under the old school of thought, but analytics said it would barely shift overall population sentiment in the current information environment. Still, every little bit helped, so a fresh wave of posts about insurgent attacks was being pushed out to blunt the sentiment damage of the raid. Another, more substantial wave would follow after the raid, to provide fodder for local peacekeeper advocates and enable immediate whataboutism to mire any public debate that might get too moral in tone.
A few more images scrolled past. Lester noticed the eyes seemed fuzzy on a few of the images and pressed ‘r’ to send them back for refining. Possibly a degradation from too many encoding changes, but more likely these were actually fakes and this particular generation algorithm hadn’t been tested enough. For legal reasons he wasn’t supposed to know anything about that: officially these were images provided by anonymous user submissions, ostensibly local activists and journalists.
An alert popped up indicating a sentiment change. He switched over to the analysis dashboard and saw a few bars shifting down and blinking red - a few new hashtags had sprung up around claims that the new school was forcibly indoctrinating children in foreign ideologies.
_Maybe blowing up the school will even help local sentiment toward us_, Lester thought.
He glanced over the rest of the dashboard - things looked mostly static, and too high. The social media data and analyses feeding the dashboard were deeply flawed: the locals knew which platforms were properly monitored and which weren’t - only the suicidal insurgents ever posted anything truly negative on the monitored channels. A recent patch had managed to get eyes on most of the remaining platforms. Unfortunately, that didn’t help much, because all the analysis models had been trained on a national media dataset. The models worked great for the capital and the denser population in the east, but may as well have been trained on 12th-century monastic dialogues when it came to posts from the western half of the country. Some of the predictive values had been so bad that Lester got the most utility by flipping their decisions: they were reliably wrong more often than right. Several of the local translators routinely used the nonsensical analyses as the basis for new jokes.