{"id":21313,"date":"2022-12-22T16:58:44","date_gmt":"2022-12-22T15:58:44","guid":{"rendered":"https:\/\/c01.purpledshub.com\/bbcsciencefocus\/?post_type=purple_issue&#038;p=21313"},"modified":"2023-01-03T11:23:26","modified_gmt":"2023-01-03T10:23:26","slug":"dr-kate-darling-can-artificial-agents-be-trusted","status":"publish","type":"post","link":"https:\/\/c01.purpledshub.com\/bbcsciencefocus\/2022\/12\/22\/dr-kate-darling-can-artificial-agents-be-trusted\/","title":{"rendered":"Dr Kate Darling: Can artificial agents be trusted?"},"content":{"rendered":"\n<h5 class=\"has-text-align-center sans-serif article-full-subhead\">COMMENT <\/h5>\n\n<h2 class=\"has-text-align-center has-ccp-secondary-color has-text-color\">DR KATE DARLING:<\/h2>\n\n<h2 class=\"has-text-align-center has-ccp-brown-color has-text-color\">CAN ARTIFICIAL AGENTS BE TRUSTED?<\/h2>\n\n<h4 class=\"has-text-align-center\">AI systems can now generate and use language that feels like a real conversation. But to whose benefit? 
<\/h4>\n\n<figure class=\"no-tts wp-block-image article-in-image photo\"><img loading=\"lazy\" width=\"1680\" height=\"2048\" src=\"https:\/\/c01.purpledshub.com\/uploads\/sites\/42\/2022\/12\/abc29850-cfee-42c8-adbf-fd498d032931.jpg\" alt=\"\" class=\"no-tts wp-image-21312\" srcset=\"https:\/\/c01.purpledshub.com\/uploads\/sites\/42\/2022\/12\/abc29850-cfee-42c8-adbf-fd498d032931.jpg 1680w, https:\/\/c01.purpledshub.com\/uploads\/sites\/42\/2022\/12\/abc29850-cfee-42c8-adbf-fd498d032931-246x300.jpg 246w, https:\/\/c01.purpledshub.com\/uploads\/sites\/42\/2022\/12\/abc29850-cfee-42c8-adbf-fd498d032931-840x1024.jpg 840w, https:\/\/c01.purpledshub.com\/uploads\/sites\/42\/2022\/12\/abc29850-cfee-42c8-adbf-fd498d032931-768x936.jpg 768w, https:\/\/c01.purpledshub.com\/uploads\/sites\/42\/2022\/12\/abc29850-cfee-42c8-adbf-fd498d032931-1260x1536.jpg 1260w\" sizes=\"(max-width: 1680px) 100vw, 1680px\" \/><\/figure>\n\n<p class=\"has-drop-cap article-full-body sans-serif dropcap\">Imagine being able to chat with an artificial intelligence (AI) about anything. What if you could discuss last night\u2019s game, get relationship advice, or have a deep philosophical debate with the virtual assistant on your kitchen counter? Large language models \u2013 AIs capable of this level of communication \u2013 being developed by companies such as OpenAI, Google and Meta are advancing quickly enough to make this our new reality. But can we trust what artificial agents have to say? <\/p>\n\n<p class=\"article-full-body sans-serif\">There are many reasons to embrace conversational AI. The potential uses in health, education, research and commercial spaces are mind-boggling. Virtual characters can\u2019t replace human contact, but mental health research suggests that having something to talk to, whether real or not, can help people. 
Besides, it\u2019s <span>fun to banter with an artificial agent that can access all of the knowledge on the internet.<\/span><\/p>\n\n<p class=\"article-full-body sans-serif\">But conversational agents also raise ethical issues. Transparency is one: people may want to know whether they\u2019re talking to a human or a machine. This makes sense, but it\u2019s probably also fairly easy to address in contexts where it matters. The bigger problem with these systems is that we trust them more than we should. <\/p>\n\n<p class=\"article-full-body sans-serif\">As conversational agents become more compelling, will we rely on them for information? We may even grow fond of the artificial characters in our daily lives. There\u2019s a large body of research in human-computer and human-robot interaction which shows that people will reciprocate social gestures, disclose personal information, and are even willing to shift their beliefs or behaviour for an artificial agent. This kind of social trust makes us vulnerable to emotional persuasion and requires careful design, as well as rules to ensure that these systems aren\u2019t used against people. <\/p>\n\n<p class=\"article-full-body sans-serif\">A chatbot may, for instance, blurt out racist terms, provide false health information, or instruct your child to touch a live electrical plug with a coin \u2013 all of which have happened. The newest language models are trained on vast amounts of text found on the internet, including toxic and misleading content. These models also use neural networks, which means that instead of following rules (\u2018if the input is x, respond y\u2019), they create an output by learning from a mishmash of examples in a way that is harder to understand or control. 
<\/p>\n\n<blockquote class=\"wp-block-quote is-style-large\"><p>\u201cWe need to ask who creates and owns large language models, who deploys the devices that use them, and whom this harms or benefits\u201d <\/p><\/blockquote>\n\n<p class=\"article-full-body sans-serif\">As it becomes more difficult to anticipate a language model\u2019s responses, there\u2019s more risk of unintended consequences. For example, if people ask their home assistant how to deal with a medical emergency, invest their money, or whether it\u2019s okay to be gay, the wrong kinds of answers can be harmful. <\/p>\n\n<p class=\"article-full-body sans-serif\">The information that people may share with their devices is also troubling. When computer scientist Joseph Weizenbaum created a simple chatbot called ELIZA in the 1960s, he was surprised to find that people would give it personal information that they wouldn\u2019t disclose to him, even though he had access to the chat data. Similarly, people may tell artificial agents their secrets, without thinking about whether and how companies may collect and mine that information. <\/p>\n\n<p class=\"article-full-body sans-serif\">We need to ask who creates and owns large language models, who deploys the devices that use them, and whom this harms or benefits. Personal information that people reveal in conversation can be used for and against them. The idea of having an artificial agent who remembers things about you is appealing, but it could also allow others to manipulate you and your loved ones. One day, your home assistant could alert you to a deal on a car you can\u2019t really afford, at a time when it knows you\u2019re emotionally most likely to buy it. Imagine if it gave you selective political information, or asked for a software upgrade it knows you\u2019re willing to spend money on. <\/p>\n\n<p class=\"article-full-body sans-serif\">If that sounds dystopian, consider that we\u2019re almost there. 
Last week, the Amazon Echo in my kitchen tried to sell my kids an \u2018extreme fart package\u2019. Advertisers invest massive amounts every year to reach children and teenagers, and recent research I collaborated on with MIT PhD students Anastasia Ostrowski and Daniella DiPaola indicates that kids are confused about the role of companies when an artificial agent advertises products to them \u2013 a very near-future consumer protection issue. <\/p>\n\n<p class=\"article-full-body sans-serif\">It\u2019s wild to think that my kids will not remember a time before we could talk to machines. As we enter this reality, it\u2019s both exciting and concerning to imagine the different paths this era could take. As people begin to forge relationships with, and perhaps even demand rights for, artificial agents, we need to ensure that this technology is designed and used responsibly. After all, humanity deserves protection, too. <\/p>\n\n<div class=\"no-tts wp-block-image article-in-image photo\"><figure class=\"no-tts alignleft is-resized\"><img src=\"https:\/\/c01.purpledshub.com\/uploads\/sites\/42\/2022\/10\/fab340ff-382f-496d-a916-5707bd6a6f20.jpg\" alt=\"\" class=\"no-tts wp-image-19061\" width=\"80\" srcset=\"https:\/\/c01.purpledshub.com\/uploads\/sites\/42\/2022\/10\/fab340ff-382f-496d-a916-5707bd6a6f20.jpg 280w, https:\/\/c01.purpledshub.com\/uploads\/sites\/42\/2022\/10\/fab340ff-382f-496d-a916-5707bd6a6f20-242x300.jpg 242w\" sizes=\"(max-width: 280px) 100vw, 280px\" \/><\/figure><\/div>\n\n<h5 class=\"sans-serif article-subhead has-ccp-secondary-color has-text-color\">DR KATE <span>DARLING<\/span><\/h5>\n\n<p class=\"article-full-body sans-serif\"><em>(<a href=\"https:\/\/twitter.com\/grok_\">@grok_<\/a>) Kate is a research scientist at the MIT Media Lab, studying human-robot interaction. 
Her book is <\/em>The New Breed <em>(\u00a320, Penguin).<\/em><\/p>\n\n<div style=\"height:100px\" aria-hidden=\"true\" class=\"no-tts wp-block-spacer\"><\/div>\n\n<p class=\"footer\">ILLUSTRATION: VALENTIN TKACH<\/p>\n","protected":false},"excerpt":{"rendered":"<p>COMMENT <\/p>\n","protected":false},"author":24,"featured_media":21312,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"ub_ctt_via":"","purple_page_number":"36","purple_custom_meta_purple_page_number":"36","purple_seq_number":"1","purple_custom_meta_purple_seq_number":"1","purple_source_article":"article_36-1.xml","purple_custom_meta_purple_source_article":"article_36-1.xml","purple_source_issue":"New-Year-2023","purple_custom_meta_purple_source_issue":"New-Year-2023","purple_external_id":"New-Year-2023-36-1","purple_custom_meta_purple_external_id":"New-Year-2023-36-1","purple_issue_code":"|0000089662||","purple_custom_meta_purple_issue_code":"|0000089662||","purple_android_product":"com.focus.magazine.issue386","purple_custom_meta_purple_android_product":"com.focus.magazine.issue386","purple_ios_product":"com.focus.magazine.issue386","purple_custom_meta_purple_ios_product":"com.focus.magazine.issue386","purple_web_product":"","purple_custom_meta_purple_web_product":"","purple_publication_id":"0f422ad1-c939-476d-9f82-a410052ad4c3","purple_migrated":"","kt_blocks_editor_width":"","apple_news_api_created_at":"2022-12-22T16:01:13Z","apple_news_article-theme":"","apple_news_api_id":"2209a583-9a55-4ca5-99fe-2c6c6e98e722","apple_news_api_modified_at":"2023-01-03T10:23:31Z","apple_news_api_revision":"AAAAAAAAAAAAAAAAAAAABQ==","apple_news_api_share_url":"https:\/\/apple.news\/AIgmlg5pVTKWZ_ixsbpjnIg","apple_news_coverimage":0,"apple_news_coverimage_caption":"","apple_news_is_hidden":false,"apple_news_is_paid":true,"apple_news_is_preview":true,"apple_news_is_sponsored":false,"apple_news_maturity_rating":"","apple_news_pullquote":"","apple_news_pullquote_pos
ition":"","apple_news_article_theme":"","apple_news_sections":"[]"},"categories":[25],"tags":[15],"apple_news_notices":[],"featured_image_src":"https:\/\/c01.purpledshub.com\/uploads\/sites\/42\/2022\/12\/abc29850-cfee-42c8-adbf-fd498d032931.jpg","author_info":{"display_name":"importmanagerhub@sprylab.com","author_link":"https:\/\/c01.purpledshub.com\/bbcsciencefocus\/author\/importmanagerhubsprylab-com\/"},"acf":{"readingTimeMinutes":"5","apple_news_title":""},"uagb_featured_image_src":{"full":["https:\/\/c01.purpledshub.com\/uploads\/sites\/42\/2022\/12\/abc29850-cfee-42c8-adbf-fd498d032931.jpg",1680,2048,false],"thumbnail":["https:\/\/c01.purpledshub.com\/uploads\/sites\/42\/2022\/12\/abc29850-cfee-42c8-adbf-fd498d032931-150x150.jpg",150,150,true],"medium":["https:\/\/c01.purpledshub.com\/uploads\/sites\/42\/2022\/12\/abc29850-cfee-42c8-adbf-fd498d032931-246x300.jpg",246,300,true],"medium_large":["https:\/\/c01.purpledshub.com\/uploads\/sites\/42\/2022\/12\/abc29850-cfee-42c8-adbf-fd498d032931-768x936.jpg",768,936,true],"large":["https:\/\/c01.purpledshub.com\/uploads\/sites\/42\/2022\/12\/abc29850-cfee-42c8-adbf-fd498d032931-840x1024.jpg",800,975,true],"1536x1536":["https:\/\/c01.purpledshub.com\/uploads\/sites\/42\/2022\/12\/abc29850-cfee-42c8-adbf-fd498d032931-1260x1536.jpg",1260,1536,true],"2048x2048":["https:\/\/c01.purpledshub.com\/uploads\/sites\/42\/2022\/12\/abc29850-cfee-42c8-adbf-fd498d032931.jpg",1680,2048,false]},"uagb_author_info":{"display_name":"importmanagerhub@sprylab.com","author_link":"https:\/\/c01.purpledshub.com\/bbcsciencefocus\/author\/importmanagerhubsprylab-com\/"},"uagb_comment_info":0,"uagb_excerpt":"COMMENT","_links":{"self":[{"href":"https:\/\/c01.purpledshub.com\/bbcsciencefocus\/wp-json\/wp\/v2\/posts\/21313"}],"collection":[{"href":"https:\/\/c01.purpledshub.com\/bbcsciencefocus\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/c01.purpledshub.com\/bbcsciencefocus\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true
,"href":"https:\/\/c01.purpledshub.com\/bbcsciencefocus\/wp-json\/wp\/v2\/users\/24"}],"replies":[{"embeddable":true,"href":"https:\/\/c01.purpledshub.com\/bbcsciencefocus\/wp-json\/wp\/v2\/comments?post=21313"}],"version-history":[{"count":13,"href":"https:\/\/c01.purpledshub.com\/bbcsciencefocus\/wp-json\/wp\/v2\/posts\/21313\/revisions"}],"predecessor-version":[{"id":22204,"href":"https:\/\/c01.purpledshub.com\/bbcsciencefocus\/wp-json\/wp\/v2\/posts\/21313\/revisions\/22204"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/c01.purpledshub.com\/bbcsciencefocus\/wp-json\/wp\/v2\/media\/21312"}],"wp:attachment":[{"href":"https:\/\/c01.purpledshub.com\/bbcsciencefocus\/wp-json\/wp\/v2\/media?parent=21313"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/c01.purpledshub.com\/bbcsciencefocus\/wp-json\/wp\/v2\/categories?post=21313"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/c01.purpledshub.com\/bbcsciencefocus\/wp-json\/wp\/v2\/tags?post=21313"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}