Banner image by Traci Folse via flickr
Reported by Alex Wendland, CSaP Policy Intern
CSaP convened a roundtable discussion for the Department for Digital, Culture, Media and Sport to explore issues around disinformation and online manipulation. Academics from the University of Cambridge joined a discussion that looked at the social impact of new and emerging technologies, and at how those technologies affect our decision-making processes.
Online manipulation can be defined as the exploitation of human psychological weaknesses to redirect behaviour. How good are you at spotting fake news? Do targeted ads make you buy more? People tend to think they are wise to these techniques, but the dreaded Dunning-Kruger effect suggests that confidence and competence aren't necessarily correlated. A lot of money changes hands on the premise that these tools can shape our behaviour, yet the evidence on how susceptible people actually are to them has not been well established.
We currently live in an era of free-flowing information, instant messaging, and on-the-fly translation. This has quickly become the norm, but is it good for society, or for us personally? With public figures receiving unfiltered abuse, and worries about people being groomed or deceived on the internet, this question is starting to weigh more heavily. Would you like to be warned when a message you receive might be harmful, or to opt into a safe search that filters out false information? There may be a middle ground to be explored that stops well short of an Orwellian surveillance state.
The steam train of Artificial Intelligence (AI) is only gathering speed, as quirky face-merging tools and the marvels of AlphaZero show. The fuel for this train is data, which we generate for media companies as we scroll through our home screens and visit various sites.
How often do we read the terms and conditions for the cookies on the sites we visit? Most of us don't; the pay-off isn't worth the time. This raises the question of what informed consent means in this area. Should we know what happens to our data: where it is collected and to whom it is sold? And would knowing change our behaviour?
Currently these markets are self-regulated, relying on political and public pressure for companies to do the right thing. Regulation is coming, though the form it should take remains unclear. Regulating individual technologies leaves government forever chasing the latest innovation, so a more outcomes-based approach is needed. Regulation should be practical, enforceable, and able to evolve with the times. There is a very real fear that bad regulation is worse than no regulation, giving companies the veneer of compliance while achieving no real outcomes.