Interactive and AI-supported systems engage audiences so they can freely explore, interact and learn about risks of all kinds, without the drawbacks of social media.

Engagement with the public on issues of great importance—societal risks of all kinds—is often lacking. When there is a credible information gap, it will be filled by someone else. What is more, the days of audiences accepting one-way risk information are coming to an end. Audience preferences for reading from screens and moving from link to link, from text to visual, audio and video in a non-linear fashion, are irreversible. Information about risks needs to be presented in this way to allow systematic interaction and feedback. This gives risk communication practitioners a great advantage: they can tailor and adjust messages and communication processes to fulfil the true definition of risk communication, a process of two-way exchange of opinions on risks and risk management.

One of the guiding principles of risk communication practice is that risk communication and risk management should be seen as parallel activities that complement each other. Both are ongoing processes that run continuously through cycles of review, framing and revision. This iterative cycle allows audiences’ values and preferences to guide the exchange, while also giving audiences insight into the process of risk management and the entire risk analysis framework. It goes a long way to addressing the core complaint about agency risk communication to publics: that it is top-down, unresponsive and lacking in empathy.

As noted in parts 1 and 2, the interactive imperative makes online channels the ideal platform. Furthermore, these channels are scalable, which solves the issue of sheer numbers—the hundreds of millions of people in Asia-Pacific who want to know more about the threat of type 2 diabetes, for example. Sounds perfect, so why haven’t institutions and agencies embraced online platforms much beyond limited forays on Facebook and Twitter?

The immediacy of these platforms is both a blessing and a liability, particularly in institutional settings. In the food arena, Rutsaert et al. (2013) note that while food regulators are generally willing to have a social media presence, they may not engage with it fully. Their reluctance is associated with a fear of losing control of information, leading to potential damage to reputation and distrust of food regulation. Furthermore, the speed of information exchange creates expectations of continuous and instant information that cannot always be met within the bureaucratic structures in which food regulators work (Panagiotopoulos et al., 2013).

The latter concern is salient in institutions with limited resources, in those with tight policies and procedures on the release of information, and where micro-management and rigid hierarchical structures are in place. Loss of control and damage to reputation are also cited by firms among the liabilities of social media, though this is naturally balanced by the capacity for direct and instantaneous communication with customers for a range of purposes. I think institutes in particular are struggling with social media generally, and continue to use it as a medium for one-way communication; see Chapman et al. (2014). This fits with the policy outlook of agencies and institutions that still adhere to the sender-receiver model (Shannon, 1948) of risk communication. It may have originated 70 years ago, but it is better than no functioning risk communication process at all—the “risk comm vacuum”—to which many organisations appear to subscribe.

If social media is confounding risk communicators, what is the hope for AI-based online solutions?

The concerns about social media are well noted: the credibility of the platforms is continually being eroded. Without question social media serves a purpose, and while it is theoretically ideal for risk communication, it may be better suited to crisis communication, where its immediacy is of most utility.

There are alternatives to social media and static HTML websites that offer the interactivity and scale of social media, with the control and credibility that organisations of all kinds are concerned about. These range from the hypertext interactive systems developed in the US in the 1960s to contextual knowledge management systems that can employ AI to make them genuine collaborators in risk communication.

Let’s take a specific example from food safety: the pressing need to tackle foodborne disease in the domestic setting. This is a problem of consumer handling and behaviour, and it can only be solved by changes to that very behaviour. However, these changes are not straightforward to initiate or maintain. If it were a matter of rational choice, more than 40% of foodborne disease cases would be eliminated by a few changes to consumer handling in the home. Many campaigns that relied on one-way information have failed; they were ignored by audiences who believed either that they were invulnerable to such incidents because they already followed best practice (the so-called illusion of control, or optimistic bias), or that the information was intended for others less knowledgeable or skilful than themselves. In fact, most consumers still think that foodborne disease originates elsewhere, in food processing or at retail. For a simple risk issue with such huge impact, efforts to date could be classed as failures.

Interactive experiences, from hypertext narratives to fully fledged AI-supported information systems driven by algorithmic capabilities, are the way forward. The beauty of these systems is that they make information personally relevant and appealing. Even a simple web application that dynamically outputs information tailored to a very specific audience segment (based upon specific user input), allowing the user to follow links and develop a narrative, will be far more effective than a one-size-fits-all catalogue of model food handling behaviours. That type of engagement will never be possible through an FAQ or a static PDF document that lists optimum storage, time and temperature, or hygiene procedures. These web applications can be developed at minimal cost and, with the most basic logic (via JavaScript or Java), can produce an information environment for audiences well beyond a social media interaction, without concerns about control, approval of messages, or the worry that campaigns can be co-opted by others for ‘nefarious’ purposes.
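The kind of basic tailoring logic described above can be sketched in a few lines of JavaScript. This is a minimal illustration, not a production design: the audience segments and the advice strings are hypothetical placeholders, not authoritative food-safety guidance.

```javascript
// Minimal sketch of rule-based message tailoring: map a user's
// self-reported household profile to targeted food-handling advice.
// Segments and messages are illustrative placeholders only.
function tailorAdvice(profile) {
  const advice = [];
  if (profile.hasYoungChildren || profile.hasElderly) {
    advice.push(
      "Your household includes people at higher risk: take extra care to cook eggs and poultry thoroughly."
    );
  }
  if (profile.usesLeftovers) {
    advice.push(
      "Refrigerate leftovers promptly and reheat them until steaming hot before eating."
    );
  }
  if (advice.length === 0) {
    // Fallback for users whose answers match no specific segment.
    advice.push("Review the four core practices: clean, separate, cook, chill.");
  }
  return advice;
}

// Example: a household with young children that regularly keeps leftovers
// receives two targeted messages instead of a generic catalogue.
console.log(tailorAdvice({ hasYoungChildren: true, usesLeftovers: true }));
```

In a real web application the profile would come from a short questionnaire form, and each message could link onward so the user builds their own narrative through the content.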

At the other end of the spectrum, we are developing AI-integrated contextual knowledge management systems that appear to the user as chatbots. Through a combination of natural language input and selection of choices, the ‘interactor’ can quickly get the information they need, whatever the nature of the request. The difference is the content, the tailoring of information and the ability to learn so-called “ideal responses.” For routine information, what we may term simple risk problems, these systems can provide a far more suitable solution than trying to staff a social media channel. Bots are also non-consumable: use by one person does not use up the bot nor prevent its use by others. In fact, the more people use the system, the more it can learn and refine its responses. The systems are available 24/7 and offer customised, integrative support and information through short conversations.
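At its simplest, the routing layer of such a chatbot is keyword-based intent matching with a fallback prompt. The sketch below is a deliberately stripped-down illustration of that pattern; the intents and replies are hypothetical examples, and a production system would sit on top of a knowledge base and a proper natural language pipeline rather than substring checks.

```javascript
// Minimal sketch of a rule-based chatbot routing layer for a
// simple risk problem (domestic food handling). Intents and reply
// text are illustrative placeholders, not official guidance.
const intents = [
  {
    keywords: ["rice", "leftover"],
    reply: "Cool cooked rice quickly, refrigerate it, and eat it within a day.",
  },
  {
    keywords: ["chicken", "poultry"],
    reply: "Cook chicken all the way through until no pink remains and juices run clear.",
  },
];

function respond(message) {
  const text = message.toLowerCase();
  for (const intent of intents) {
    // First intent whose keywords appear in the message wins.
    if (intent.keywords.some((k) => text.includes(k))) {
      return intent.reply;
    }
  }
  // Fallback when no intent matches: steer the user back on topic.
  return "I did not catch that. Ask me about storing, cooking or reheating a specific food.";
}

console.log(respond("How should I store leftover rice?"));
```

The “learning” described above would layer on top of this: logging which replies users rate as helpful and promoting those towards the ideal response for each intent.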

If this sounds like science fiction, the reality is that the foundations of natural language interaction between human and machine were laid by ELIZA, a computer program developed in the 1960s (Weizenbaum, 1966). It is simply a matter of applying these technologies to pressing issues of behaviour change and risk communication, in addition to the domain where most of the commercial development has occurred: customer service.
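ELIZA’s core trick was pattern matching plus pronoun reflection: recognise a template in the user’s input and echo the captured fragment back as a question. A toy rendition of that mechanism, with a made-up pattern and reflection table for illustration:

```javascript
// Toy ELIZA-style responder: one pattern, pronoun reflection, and
// a generic fallback. The table and pattern are illustrative only.
const reflections = { i: "you", my: "your", am: "are", me: "you" };

// Swap first-person words for second-person ones, ELIZA-style.
function reflect(phrase) {
  return phrase
    .split(" ")
    .map((w) => reflections[w.toLowerCase()] || w)
    .join(" ");
}

function elizaReply(input) {
  const m = input.match(/i am (.*)/i);
  if (m) {
    return `Why do you say you are ${reflect(m[1])}?`;
  }
  return "Please tell me more.";
}

console.log(elizaReply("I am worried about my kitchen hygiene"));
```

Modern chatbot frameworks replace the hand-written patterns with trained intent classifiers, but the conversational loop is recognisably the same.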

Basic interactive online systems can improve risk communication and behaviour change in a wide range of fields, and do so at scale and at reasonable cost. AI enhancements can improve the user experience further and integrate fully with existing informational and multi-platform support systems, in place of or augmenting existing risk communication efforts. The challenges of integrating the social sciences into AI systems are significant, but they are key to developing machines that act in accordance with human values.

In part IV, we will look at how AI and interactive experiences can help meet needs in risk communication for health promotion and advance the cause of developing socially just applications for AI.

Andrew Roberts
