Department for Science, Innovation & Technology

Consultation outcome

A pro-innovation approach to AI regulation: government response

Updated 6 February 2024

Contents

1. Ministerial foreword
2. Executive summary
3. Glossary
4. Introduction
5. A regulatory framework to keep pace with a rapidly advancing technology
6. Summary of consultation evidence and government response
Annex A: Method and engagement
Annex B: List of consultation respondents
Annex C: Individual question summaries
Annex D: Summary of impact assessment evidence

© Crown copyright 2024

This publication is licensed under the terms of the Open Government Licence v3.0 except where otherwise stated. To view this licence, visit nationalarchives.gov.uk/doc/open-government-licence/version/3 or write to the Information Policy Team, The National Archives, Kew, London TW9 4DU, or email: psi@nationalarchives.gov.uk.

Where we have identified any third party copyright information you will need to obtain permission from the copyright holders concerned.

This publication is available at https://www.gov.uk/government/consultations/ai-regulation-a-pro-innovation-approach-policy-proposals/outcome/a-pro-innovation-approach-to-ai-regulation-government-response

Command Paper: CP 1019

ISBN: 978-1-5286-4565-2

Unique Reference: E03019481 02/24

Presented to Parliament by the Secretary of State for Science, Innovation and Technology by Command of His Majesty on 6 February 2024.

1. Ministerial foreword

[Image: The Rt Hon Michelle Donelan MP, Secretary of State for Science, Innovation and Technology.]

The world is on the cusp of an extraordinary new era driven by advances in Artificial Intelligence (AI). I see the rapid improvements in AI capabilities as a once-in-a-generation opportunity for the British people to revolutionise our public services for the better and to deliver real, tangible, long-term results for our country.

The UK AI market is predicted to grow to over $1 trillion (USD) by 2035[footnote 1] – unlocking everything from new skills and jobs to once unimaginable life saving treatments for cruel diseases like cancer and dementia. My ambition is for us to revolutionise the way we deliver public services by becoming a global leader in safe AI development and deployment.

We have done more than any government in history to make that a reality, and our plan is working. Last year, we hosted the world’s first AI Safety Summit, bringing industry, academia, and civil society together with 28 leading AI nations and the EU to agree the Bletchley Declaration – a landmark commitment to share responsibility on mitigating the risks of frontier AI, collaborate on safety and research, and to promote its potential as a force for good in this world.

We were the first government in the world to formally publish our assessment of the capabilities and risks presented by advanced AI.
Research-driven reports produced by DSIT and the Government Office for Science[footnote 2] laid the groundwork for an international agreement on evaluating the scientific basis for AI safety.

We brought together a powerful consortium of experts in our AI Safety Institute, the first government-backed organisation of its kind anywhere in the world, committed to advancing AI safety in the public interest.

With the publication of our AI regulation white paper in March 2023, I wanted to take a bold and considered approach that is strongly pro-innovation and pro-safety. I knew that our approach had to remain agile enough to deal with the unprecedented speed of development, while also remaining robust enough in each sector to address the key concerns around potential societal harms, misuse risks, and autonomy risks that our thought leadership exercises have revealed.

This agile, sector-based approach has empowered regulators to create bespoke measures that are tailored to the various needs and risks posed by different sections of our economy. The white paper proposed five clear principles for existing UK regulators to follow, and set out our expectations for responsible AI innovation.

This common sense, pragmatic approach has been welcomed and endorsed both by the companies at the frontier of AI development and leading AI safety experts. Google DeepMind, Microsoft, OpenAI and Anthropic all supported the UK’s approach, as did Britain’s budding AI start-up scene, and many leading voices in academia and civil society.

In considering our response to the consultation, I have sought to double down on this success and drive forward our plans to make Britain the safest and most innovative place to develop and deploy AI in the world, backed by over £100 million to support AI innovation and regulation. Building on feedback from the consultation, we have set up a central function to drive coherence in our regulatory approach across government, including by recruiting a new multidisciplinary team to conduct cross-sector risk assessment and monitoring to guard against existing and emerging risks in AI.

With the Digital Regulation Cooperation Forum (DRCF), we have launched the AI and Digital Hub, a pilot scheme for a brand-new advisory service to support innovation run by expert regulators including Ofcom, the CMA, the FCA and the ICO[footnote 3]. We are also investing in new support for regulators to build their practical, technical expertise and backing the launch of nine new research hubs across the UK to harness the power of AI in everything from mathematics to healthcare.

Advancing our thought leadership on safety, we also lay out the case for a set of targeted, binding requirements on developers of highly capable general-purpose AI models in the future to ensure that powerful, sophisticated AI develops in a way which is safe. And our targeted consultations on our cross-economy AI risk register and monitoring and evaluation framework will engage with leading voices from regulators, academia, civil society, and industry.

The AI Safety Institute’s technical experts will have a crucial role to play here as we develop our approach on the regulation of highly capable general-purpose systems. We will work closely with AI developers, with academics and civil society members who can provide independent expert perspectives, and also with our international partners ahead of the next AI Safety Summits in the Republic of Korea and France.

Finally, my thinking on the UK’s AI leadership role goes well beyond the immediate horizon. We will need to lead fields of research that will help us build a more resilient society ready for a world where advanced AI technology and the means to develop it are widely accessible. That means improving our defensive capabilities against bad actors seeking to use AI to do harm, it means designing new internet infrastructure for a digital world full of agentic AI systems, and it also means leveraging AI to improve critical aspects of our society such as democratic deliberation and consensus. AI can and must remain a force for the public good, and we will ensure that is the case as we develop our policy approach in this area.

This response paper is another clear, decisive step forward for the UK’s ambitions to lead in safe AI and to be a Science and Technology Superpower by the end of the decade. Whether you are an AI developer, user, safety researcher or you represent civil society, we all have a shared interest in realising the opportunities of safe AI development. I am personally driven by a mission to improve the lives of the British people through technology and innovation, and our response paper sets out exactly how that mission will become a reality.

2. Executive summary
- The pace of progress in Artificial Intelligence (AI) has been unlike any previous technology and the benefits are already being realised across the UK: AI is helping to make our jobs safer and more satisfying, conserve our wildlife and fight climate change, and make our public services more efficient. Not only do we need to plan for the capabilities and uses of the AI systems we have today, but we must also prepare for a near future where the most powerful systems are broadly accessible and significantly more capable[footnote 4].

- The UK is leading the world in how to respond to this challenge. Our approach to preparing for such a future is firmly pro-innovation. To realise the immense benefits of these technologies, we must ensure AI’s trustworthiness and public adoption through a strong pro-safety approach. As the Prime Minister set out in a landmark speech in October 2023, “the future of AI is safe AI. And by making the UK a global leader in safe AI, we will attract even more of the new jobs and investment that will come from this new wave of technology”[footnote 5]. To achieve this, the UK is investing more in AI safety than any other country in the world. Today we are announcing over £100 million to help realise new AI innovations and support regulators’ technical capabilities.

- Our regulatory framework builds on the existing strengths of both our thriving AI industry and expert regulatory ecosystem. We are focused on ensuring that regulators are prepared to face the new challenges and opportunities that AI can bring to their domains. By working closely with regulators to ensure cohesion across the landscape, we are ensuring that innovators can bring new products to market safely and quickly. Today we are announcing several new initiatives to make the UK an even better place to build and use AI, including £10 million to jumpstart regulators’ AI capabilities; a new commitment by UK Research and Innovation (UKRI) that future investments in AI research will be leveraged to support regulator skills and expertise; and a £9 million partnership with the US on responsible AI as part of our International Science Partnerships Fund[footnote 6]. Through this and other work on AI across government, the UK will continue to respond to risks proportionately and effectively, striving to lead thinking on AI in the years to come.

- In March 2023, we published our AI regulation white paper, setting out initial proposals to develop a pro-innovation regulatory framework for AI. The proposed framework outlined five cross-sectoral principles for the UK’s existing regulators to interpret and apply within their remits. We also proposed a new central function to bring coherence to the regime and address regulatory gaps. This flexible and adaptive regulatory approach has enabled us to act decisively and respond to technological progress.

- Our context-based framework received strong support from stakeholders across society and we have acted quickly to implement it. We are pleased that a number of regulators are already taking action in line with our proposed approach, from the Competition and Markets Authority’s (CMA) review of foundation models to the updated guidance on data protection and AI by the Information Commissioner’s Office (ICO). We are asking a number of regulators to publish an update outlining their strategic approach to AI by 30 April 2024.

- We have already started developing the central function to support effective risk monitoring, regulator coordination, and knowledge exchange. Our new £10 million package to boost regulators’ AI capabilities, mentioned above, will help our regulators develop cutting-edge research and practical tools to build the foundations of their AI expertise and everyday ability to address AI risks in their domains. Today, we are also publishing new guidance to support regulators to implement the principles effectively and the Digital Regulation Cooperation Forum (DRCF) is sharing details on the eligibility criteria for the support to be offered by the AI and Digital Hub pilot.

- We are backing this approach with wider support for the AI ecosystem, including committing over £1.5 billion in 2023 to build the next generation of supercomputers in the public sector and today announcing an £80 million boost in AI research through the launch of nine new research hubs across the UK to propel transformative innovations. In November 2023, the Prime Minister brought together leading global actors in AI for the first AI Safety Summit where they discussed and agreed actions to address emerging risks posed by the development and deployment of the most powerful AI systems.
Leading <abbr title="artificial intelligence">AI</abbr> developers set out the steps they are already taking to make models safe and committed to sharing the most powerful <abbr title="artificial intelligence">AI</abbr> models with governments for testing so that we can ensure safety today and prepare for the risks of tomorrow.</p> </li> <li> <p>Our initial technical contribution to this international effort is through the creation of an <abbr title="artificial intelligence">AI</abbr> Safety Institute to lead evaluations and safety research in the UK government, in collaboration with partners across the world including in the US. The <abbr title="artificial intelligence">AI</abbr> Safety Summit underscored the global nature of <abbr title="artificial intelligence">AI</abbr> development and deployment, demonstrating the need for further work towards a coherent and collaborative approach to international governance.</p> </li> <li> <p>Our overall approach – combining cross-sectoral principles and a context-specific framework, international leadership and collaboration, and voluntary measures on developers – is right today as it allows us to keep pace with rapid and uncertain advances in <abbr title="artificial intelligence">AI</abbr>. However, the challenges posed by <abbr title="artificial intelligence">AI</abbr> technologies will ultimately require legislative action in every country once understanding of risk has matured. In this document, we build on our pro-innovation framework and pro-safety actions by setting out our early thinking and the questions that we will need to consider for the next stage of our regulatory approach.</p> </li> <li> <p>As <abbr title="artificial intelligence">AI</abbr> systems advance in capability and societal impact, it is clear that some mandatory measures will ultimately be required across all jurisdictions to address potential <abbr title="artificial intelligence">AI</abbr>-related harms, ensure public safety, and let us realise the transformative opportunities that the technology offers. However, acting before we properly understand the risks and appropriate mitigations would harm our ability to benefit from technological progress while leaving us unable to adapt quickly to emerging risks. We are going to take our time to get this right – we will legislate when we are confident that it is the right thing to do.</p> </li> <li> <p>We have placed a particular emphasis on the challenges that highly capable general-purpose <abbr title="artificial intelligence">AI</abbr> systems pose to a context-based framework. Here we lay out a pro-innovation case for further targeted binding requirements on the small number of organisations developing highly capable general-purpose <abbr title="artificial intelligence">AI</abbr> systems to ensure that they are accountable for making these technologies sufficiently safe. This can be done while allowing our expert regulators to provide effective rules for the use of <abbr title="artificial intelligence">AI</abbr> within their remits.</p> </li> <li> <p>In the coming months, we will formally establish our activities to support regulator capabilities and coordination, including a new steering committee with government and regulator representatives to support coordination across the <abbr title="artificial intelligence">AI</abbr> governance landscape. We will conduct targeted consultations on our cross-economy <abbr title="artificial intelligence">AI</abbr> risk register and plan to assess the regulatory framework. 
We will continue our work to address the key issues of today, from electoral interference to discrimination to intellectual property law, and the most pressing risks of tomorrow, such as biosecurity and <abbr title="artificial intelligence">AI</abbr> alignment. We will also continue to lead international conversations on <abbr title="artificial intelligence">AI</abbr> governance across a range of fora and initiatives in the lead-up to the next <abbr title="artificial intelligence">AI</abbr> Safety Summits in the Republic of Korea and France.</p> </li> </ul> <h2 id="glossary"> <span class="number">3. </span> Glossary</h2> <p><strong>Adaptivity</strong>: The ability to see patterns and make decisions in ways not directly envisioned by human programmers.</p> <p><strong>Artificial General Intelligence (<abbr title="Artificial General Intelligence">AGI</abbr>)</strong>: A theoretical form of advanced <abbr title="artificial intelligence">AI</abbr> that would have capabilities that compare to or exceed humans across most economically valuable work<sup id="fnref:7" role="doc-noteref"><a href="#fn:7" class="govuk-link" rel="footnote">[footnote 7]</a></sup>. A number of <abbr title="artificial intelligence">AI</abbr> companies have publicly stated their aim to build <abbr title="Artificial General Intelligence">AGI</abbr> and believe it may be achievable within the next twenty years. Other experts believe we may not build <abbr title="Artificial General Intelligence">AGI</abbr> for many decades, if ever.</p> <p><strong><abbr title="artificial intelligence">AI</abbr> agents</strong>: Autonomous <abbr title="artificial intelligence">AI</abbr> systems that perform multiple sequential steps – sometimes including actions like browsing the internet, sending emails, or sending instructions to physical equipment – to try to complete a high-level task or goal.</p> <p><strong><abbr title="artificial intelligence">AI</abbr> deployers</strong>: Any individual or organisation that supplies or uses an <abbr title="artificial intelligence">AI</abbr> application to provide a product or service to an end user.</p> <p><strong><abbr title="artificial intelligence">AI</abbr> developers</strong>: Organisations or individuals who design, build, train, adapt, or combine <abbr title="artificial intelligence">AI</abbr> models and applications.</p> <p><strong><abbr title="artificial intelligence">AI</abbr> end user</strong>: Any intended or actual individual or organisation that uses or consumes an <abbr title="artificial intelligence">AI</abbr>-based product or service as it is deployed.</p> <p><strong><abbr title="artificial intelligence">AI</abbr> life cycle</strong>: All events and processes that relate to an <abbr title="artificial intelligence">AI</abbr> system’s lifespan, from inception to decommissioning, including its design, research, training, development, deployment, integration, operation, maintenance, sale, use, and governance.</p> <p><strong><abbr title="artificial intelligence">AI</abbr> risks</strong>: The potential negative or harmful outcomes arising from the development or deployment of <abbr title="artificial intelligence">AI</abbr> systems.</p> <p><strong>Alignment</strong>: The process of ensuring an <abbr title="artificial intelligence">AI</abbr> system’s goals and behaviours are in line with human values and intentions.</p> <p><strong>Application Programming Interface (API)</strong>: A set of rules and protocols that enables integration and communication between <abbr title="artificial 
intelligence">AI</abbr> systems and other software applications.</p> <p><strong>Autonomous</strong>: Capable of operating, taking actions, or making decisions without the express intent or oversight of a human.</p> <p><strong>Capabilities</strong>: The range of tasks or functions that an <abbr title="artificial intelligence">AI</abbr> system can perform and the proficiency with which it can perform them.</p> <p><strong>Compute</strong>: Computational processing power, including Central Processing Units (<abbr title="Central Processing Units">CPUs</abbr>), Graphics Processing Units (<abbr title="Graphics Processing Units">GPUs</abbr>), and other hardware, used to run <abbr title="artificial intelligence">AI</abbr> models and algorithms.</p> <p><strong>Developers of highly capable general-purpose systems</strong>: A subsection of <abbr title="artificial intelligence">AI</abbr> developers, these organisations invest large amounts of resource into designing, building, and pre-training the most capable <abbr title="artificial intelligence">AI</abbr> foundation models. These models can underpin a wide range of <abbr title="artificial intelligence">AI</abbr> applications and may be deployed directly or adapted by downstream <abbr title="artificial intelligence">AI</abbr> developers.</p> <p><strong>Disinformation</strong>: Deliberately false information spread with the intent to deceive or mislead.</p> <p><strong>Foundation models</strong>: Machine learning models trained on very large amounts of data that can be adapted to a wide range of tasks.</p> <p><strong>Frontier <abbr title="artificial intelligence">AI</abbr></strong>: For the <abbr title="artificial intelligence">AI</abbr> Safety Summit, we defined frontier <abbr title="artificial intelligence">AI</abbr> as models that can perform a wide variety of tasks and match or exceed the capabilities present in today’s most advanced models. In this paper, we focus on highly capable general-purpose <abbr title="artificial intelligence">AI</abbr> model developers to target our proposals for new responsibilities.</p> <p><strong>Misinformation</strong>: Incorrect or misleading information spread without harmful intent.</p> <p><strong>Safety and security</strong>: The protection, wellbeing, and autonomy of civil society and the population<sup id="fnref:8" role="doc-noteref"><a href="#fn:8" class="govuk-link" rel="footnote">[footnote 8]</a></sup>. In this publication, safety is often used to describe prevention of or protection against <abbr title="artificial intelligence">AI</abbr>-related harms. 
<abbr title="artificial intelligence">AI</abbr> security refers to protecting <abbr title="artificial intelligence">AI</abbr> systems from technical interference such as cyber-attacks<sup id="fnref:9" role="doc-noteref"><a href="#fn:9" class="govuk-link" rel="footnote">[footnote 9]</a></sup>.</p> <p><strong>Superhuman performance</strong>: When an <abbr title="artificial intelligence">AI</abbr> model demonstrates capabilities that exceed human ability benchmarking for a specific task or activity.</p> <div class="call-to-action"> <h3 id="box-1-different-types-of-ai-systems">Box 1: Different types of <abbr title="artificial intelligence">AI</abbr> systems</h3> <p>In our discussion paper on frontier <abbr title="artificial intelligence">AI</abbr> capabilities and risks<sup id="fnref:10" role="doc-noteref"><a href="#fn:10" class="govuk-link" rel="footnote">[footnote 10]</a></sup>, we noted that definitions of <abbr title="artificial intelligence">AI</abbr> are often challenging due to the quick advancements in the technology.</p> <p>For the purposes of developing a proportionate regulatory approach that effectively addresses the risks posed by the most powerful <abbr title="artificial intelligence">AI</abbr> systems, we currently distinguish between:</p> <ol> <li> <p><strong>Highly capable general-purpose <abbr title="artificial intelligence">AI</abbr></strong>: Foundation models that can perform a wide variety of tasks and match or exceed the capabilities present in today’s most advanced models. Generally, such models will span from novice through to expert capabilities with some even showing superhuman performance across a range of tasks.</p> </li> <li> <p><strong>Highly capable narrow <abbr title="artificial intelligence">AI</abbr></strong>: Foundation models that can perform a narrow set of tasks, normally within a specific field such as biology, with capabilities that match or exceed those present in today’s most advanced models. Generally, such models will demonstrate superhuman abilities on these narrow tasks or domains.</p> </li> <li> <p><strong>Agentic <abbr title="artificial intelligence">AI</abbr> or <abbr title="artificial intelligence">AI</abbr> agents</strong>: An emerging subset of <abbr title="artificial intelligence">AI</abbr> technologies that can competently complete tasks over long timeframes and with multiple steps. These systems can use tools such as coding environments, the internet, and narrow <abbr title="artificial intelligence">AI</abbr> models to complete tasks.</p> </li> </ol> </div> <h2 id="introduction"> <span class="number">4. </span> Introduction</h2> <p>1. The UK’s <abbr title="artificial intelligence">AI</abbr> sector is thriving. The <abbr title="artificial intelligence">AI</abbr> industry in the UK employs over 50,000 people and contributes £3.7 billion to economy<sup id="fnref:11" role="doc-noteref"><a href="#fn:11" class="govuk-link" rel="footnote">[footnote 11]</a></sup>. Our universities produce some of the best <abbr title="artificial intelligence">AI</abbr> research and talent, and the UK is home to the third largest number of <abbr title="artificial intelligence">AI</abbr> unicorns and start-ups in the world<sup id="fnref:12" role="doc-noteref"><a href="#fn:12" class="govuk-link" rel="footnote">[footnote 12]</a></sup>.</p> <p>2. Our goal is to make the UK a great place to build and use <abbr title="artificial intelligence">AI</abbr> that changes our lives for the better. 
<abbr title="artificial intelligence">AI</abbr> is the defining technology of our time and the UK is leading the world with our response.</p> <p>3. In March 2023, we published a white paper setting out our proposals to establish a regulatory framework for <abbr title="artificial intelligence">AI</abbr> to drive safe, responsible innovation<sup id="fnref:13" role="doc-noteref"><a href="#fn:13" class="govuk-link" rel="footnote">[footnote 13]</a></sup>. We set five principles for regulators to interpret and apply within their domains. We also included proposals for a central function within government to conduct a range of activities such as risk assessment and regulatory coordination to support the adaptability and coherence of our approach.</p> <p>4. We held a 12-week public consultation on our proposals<sup id="fnref:14" role="doc-noteref"><a href="#fn:14" class="govuk-link" rel="footnote">[footnote 14]</a></sup>. We have now analysed the evidence (see Annex A for details) which has informed our approach. We thank everyone for their submissions. We have also built into our response the key achievements from the <abbr title="artificial intelligence">AI</abbr> Safety Summit in November 2023, as well as themes from our engagement ahead of the Summit.</p> <h3 id="ai-white-paper-consultation-and-ai-summit-activities"> <abbr title="artificial intelligence">AI</abbr> White Paper consultation and <abbr title="artificial intelligence">AI</abbr> Summit activities</h3> <figure class="image embedded"><div class="img"><img src="https://assets.publishing.service.gov.uk/media/65b8ddf3e9e10a0013031086/ai-white-paper-consultation-ai-summit-activities.svg" alt=""></div> <figcaption><p>AI White Paper consultation and AI Summit activities.</p></figcaption></figure> <p>5. The pace of <abbr title="artificial intelligence">AI</abbr> development continues to accelerate. In the run up to the <abbr title="artificial intelligence">AI</abbr> Safety Summit, we published a discussion paper on <abbr title="artificial intelligence">AI</abbr> risks and capabilities that showed these trends are likely to continue in line with companies building these technologies using more compute, more data, and increasingly efficient algorithms<sup id="fnref:15" role="doc-noteref"><a href="#fn:15" class="govuk-link" rel="footnote">[footnote 15]</a></sup>. Some frontier <abbr title="artificial intelligence">AI</abbr> labs have stated their goal to build <abbr title="artificial intelligence">AI</abbr> systems that are more capable than humans at a range of tasks<sup id="fnref:16" role="doc-noteref"><a href="#fn:16" class="govuk-link" rel="footnote">[footnote 16]</a></sup>.</p> <p>6. Enhanced capabilities bring new opportunities. <abbr title="artificial intelligence">AI</abbr> is already changing the way that we live and work. Workers using <abbr title="artificial intelligence">AI</abbr> in sectors ranging from manufacturing to finance have reported improvements to their job enjoyment, performance, and health<sup id="fnref:17" role="doc-noteref"><a href="#fn:17" class="govuk-link" rel="footnote">[footnote 17]</a></sup>. <abbr title="artificial intelligence">AI</abbr> will change the tasks we do at work and the skills we need to do them well<sup id="fnref:18" role="doc-noteref"><a href="#fn:18" class="govuk-link" rel="footnote">[footnote 18]</a></sup>. 
Recent <abbr title="artificial intelligence">AI</abbr> developments are also changing how we spend our leisure time, with powerful <abbr title="artificial intelligence">AI</abbr> systems underpinning the chatbots and image generators that have become some of the fastest growing consumer applications in history<sup id="fnref:19" role="doc-noteref"><a href="#fn:19" class="govuk-link" rel="footnote">[footnote 19]</a></sup>. Highly capable <abbr title="artificial intelligence">AI</abbr> is already transforming sectors, from helping us to conserve our wildlife<sup id="fnref:20" role="doc-noteref"><a href="#fn:20" class="govuk-link" rel="footnote">[footnote 20]</a></sup> to changing the ways that we identify and treat disease<sup id="fnref:21" role="doc-noteref"><a href="#fn:21" class="govuk-link" rel="footnote">[footnote 21]</a></sup>.</p> <p>7. However, more powerful <abbr title="artificial intelligence">AI</abbr> also poses new and amplified risks. For example, <abbr title="artificial intelligence">AI</abbr> chatbots may make false information more prominent<sup id="fnref:22" role="doc-noteref"><a href="#fn:22" class="govuk-link" rel="footnote">[footnote 22]</a></sup> or a highly capable <abbr title="artificial intelligence">AI</abbr> system may be misused to enable crime. For instance, a model designed for drug discovery could potentially be accessed maliciously to create harmful compounds<sup id="fnref:23" role="doc-noteref"><a href="#fn:23" class="govuk-link" rel="footnote">[footnote 23]</a></sup>.</p> <p>8. <abbr title="artificial intelligence">AI</abbr> may also fundamentally transform life in ways that are hard to predict. For instance, future agentic <abbr title="artificial intelligence">AI</abbr> systems may be able to pursue complex goals with limited human supervision, raising questions around how <abbr title="artificial intelligence">AI</abbr> agents remain attributable, ask for approval before taking action, and can be interrupted.</p> <p>9. <abbr title="artificial intelligence">AI</abbr> technologies present significant uncertainties that require an agile regulatory approach that supports innovation whilst adapting to address new risks. In this consultation response, we show how our flexible approach is already addressing key <abbr title="artificial intelligence">AI</abbr>-related risks and how we are further strengthening this framework (section 5.1). We also set out initial thinking on potential new responsibilities on the developers of highly capable general-purpose <abbr title="artificial intelligence">AI</abbr> systems alongside the voluntary commitments secured at the <abbr title="artificial intelligence">AI</abbr> Safety Summit (section 5.2). In section 6, we provide a summary of the evidence we received to our consultation along with our formal response.</p> <h2 id="a-regulatory-framework-to-keep-pace-with-a-rapidly-advancing-technology"> <span class="number">5. </span> A regulatory framework to keep pace with a rapidly advancing technology</h2> <p>10. In the <abbr title="artificial intelligence">AI</abbr> regulation white paper, we proposed five cross-sectoral principles for existing regulators to interpret and apply within their remits in order to drive safe, responsible <abbr title="artificial intelligence">AI</abbr> innovation<sup id="fnref:24" role="doc-noteref"><a href="#fn:24" class="govuk-link" rel="footnote">[footnote 24]</a></sup>. 
These are:</p> <ul> <li>Safety, security and robustness.</li> <li>Appropriate transparency and explainability.</li> <li>Fairness.</li> <li>Accountability and governance.</li> <li>Contestability and redress.</li> </ul> <p>11. We welcome the strong support for these principles through the consultation. They are the foundation of our approach. We remain committed to a context-based approach that avoids unnecessary blanket rules that apply to all <abbr title="artificial intelligence">AI</abbr> technologies, regardless of how they are used. This is the best way to ensure an agile approach that stands the test of time.</p> <p>12. We are pleased to see how regulators are already independently implementing our principles. In the white paper we highlighted the importance of a central function to support regulator capabilities and coordination. We have made good progress establishing this function within the government. We set out below how we are further strengthening it, including new funding, in section 5.1. We also show how regulators and the government are addressing some of the most important issues facing us today.</p> <p>13. In section 5.2, we set out some of the regulatory challenges posed by the rapid development of highly capable general-purpose systems; how we are currently tackling these through voluntary measures, including those agreed at the <abbr title="artificial intelligence">AI</abbr> Safety Summit; and which additional responsibilities may be required in the future to address risks effectively.</p> <h3 id="delivering-a-proportionate-context-based-approach-to-regulate-the-use-of-ai"> <span class="number">5.1. </span> Delivering a proportionate, context-based approach to regulate the use of <abbr title="artificial intelligence">AI</abbr> </h3> <h4 id="regulators-are-taking-active-steps-in-line-with-the-framework">5.1.1. Regulators are taking active steps in line with the framework</h4> <p>14. Since the publication of the <abbr title="artificial intelligence">AI</abbr> regulation white paper, a number of regulators have set out work in line with our principles-based approach. For example, the Competition and Markets Authority (<abbr title="Competition and Markets Authority">CMA</abbr>) published a review of foundation models to understand the opportunities and risks for competition and consumer protection<sup id="fnref:25" role="doc-noteref"><a href="#fn:25" class="govuk-link" rel="footnote">[footnote 25]</a></sup>. The Information Commissioner’s Office (<abbr title="Information Commissioner's Office">ICO</abbr>) updated guidance on how data protection laws apply to <abbr title="artificial intelligence">AI</abbr> systems to include fairness<sup id="fnref:26" role="doc-noteref"><a href="#fn:26" class="govuk-link" rel="footnote">[footnote 26]</a></sup>. To ensure the safety of <abbr title="artificial intelligence">AI</abbr>, regulators such as the Office of Gas and Electricity Markets (<abbr title="Office of Gas and Electricity Markets">Ofgem</abbr>) and Civil Aviation Authority (<abbr title="Civil Aviation Authority">CAA</abbr>) are working on <abbr title="artificial intelligence">AI</abbr> strategies to be published later this year. 
This builds on regulator work that led the way on clarifying how existing frameworks apply to <abbr title="artificial intelligence">AI</abbr> risks in their domain, such as the Medicines and Healthcare products Regulatory Agency (<abbr title="Medicines and Healthcare products Regulatory Agency">MHRA</abbr>) Software and <abbr title="artificial intelligence">AI</abbr> as a Medical Device Change Programme 2021 on requirements for software and <abbr title="artificial intelligence">AI</abbr> used in medical devices<sup id="fnref:27" role="doc-noteref"><a href="#fn:27" class="govuk-link" rel="footnote">[footnote 27]</a></sup>.</p> <p>15. It is important that the public have full visibility of how regulators are incorporating the principles into their work. The government has written to a number of regulators impacted by <abbr title="artificial intelligence">AI</abbr> to ask them to publish an update outlining their strategic approach to <abbr title="artificial intelligence">AI</abbr> by 30 April 2024<sup id="fnref:28" role="doc-noteref"><a href="#fn:28" class="govuk-link" rel="footnote">[footnote 28]</a></sup>. We are encouraging regulators to include:</p> <ul> <li>An outline of the steps they are taking in line with the expectations set out in the white paper.</li> <li>Analysis of <abbr title="artificial intelligence">AI</abbr>-related risks in the sectors and activities they regulate and the actions they are taking to address these.</li> <li>An explanation of their current capability to address <abbr title="artificial intelligence">AI</abbr> as compared with their assessment of requirements, and the actions they are taking to ensure they have the right structures and skills in place.</li> <li>A forward look of plans and activities over the coming 12 months.</li> </ul> <p>16. When we published the <abbr title="artificial intelligence">AI</abbr> regulation white paper, we proposed that the principles would be established on a non-statutory basis. Many consultation respondents noted the potential benefits of a statutory duty on regulators, but some acknowledged that implementing the regime on a non-statutory basis in the first instance would allow for important flexibilities. We think a non-statutory approach currently offers critical adaptability – especially while we are still establishing our approach – but we will keep this under review. Our decision will be informed in part by our review of the plans published by regulators, as set out above; our review of regulator powers, as set out below; and our wider approach to <abbr title="artificial intelligence">AI</abbr> legislation, such as the introduction of targeted binding measures (see section 5.2).</p> <h4 id="supporting-regulatory-capability-and-coordination">5.1.2 Supporting regulatory capability and coordination</h4> <p>17. The systemic changes driven by <abbr title="artificial intelligence">AI</abbr> demand a system-wide response – our individual regulators cannot successfully address the opportunities and risks presented by <abbr title="artificial intelligence">AI</abbr> technologies within their remits by acting in isolation. In the <abbr title="artificial intelligence">AI</abbr> regulation white paper, we proposed a new central function, established within government, to monitor and assess risks across the whole economy and support regulator coordination and clarity.</p> <p>18.
The proposal for a central function was widely welcomed by stakeholders who noted it is critical to the effective delivery of the <abbr title="artificial intelligence">AI</abbr> regulation framework. Many stressed that, without such a function, there is a risk of regulatory overlaps, gaps, and poor coordination as multiple regulators consider the impact of <abbr title="artificial intelligence">AI</abbr> in their domains.</p> <p>19. We have already started to establish this function in a range of ways:</p> <p>i. <strong>Risk assessment</strong>: We have recruited a new multidisciplinary team to undertake cross-sectoral risk monitoring within the Department for Science, Innovation and Technology (<abbr title="Department for Science, Innovation and Technology">DSIT</abbr>), bringing together expertise in risk, regulation, and <abbr title="artificial intelligence">AI</abbr> with backgrounds in data science, engineering, economics, and law. This team will provide continuous examination of cross-cutting <abbr title="artificial intelligence">AI</abbr> risks, including evaluating the effectiveness of interventions by both the government and regulators. In 2024, we will launch a targeted consultation on a cross-economy <abbr title="artificial intelligence">AI</abbr> risk register to ensure it comprehensively captures the range of risks. It will provide a single source of truth on <abbr title="artificial intelligence">AI</abbr> risks which regulators, government departments, and external groups can use. It will also support government work to identify any risks that fall across or in between the remits of regulators so we can identify where there are gaps or existing regulation is ineffective and prioritise further action. In addition to the risk register, we are considering the added value of developing a risk management framework, similar to the one developed in the US by the National Institute of Standards and Technology (<abbr title="National Institute of Standards and Technology">NIST</abbr>).</p> <p>ii. <strong>Regulator capabilities</strong>: Effective regulation relies on regulators having the right skills, tools, and expertise. While some regulators have been able to put the right expertise in place to address <abbr title="artificial intelligence">AI</abbr>, others are less prepared. We are announcing £10 million for regulators to develop the capabilities and tools they need to adapt and respond to <abbr title="artificial intelligence">AI</abbr>. We are investing in regulators today to future-proof their capabilities for tomorrow. The funding will enable regulators to collaborate to create, adapt, and improve practical tools to address <abbr title="artificial intelligence">AI</abbr> risks and opportunities within and across their remits. It will enable regulators to carry out research and development to produce novel, actionable insights that will set the foundation of their approaches for years to come. We will work closely with regulators in the coming months to identify the most promising opportunities to leverage this funding. This builds on the recent announcement that the government will explore how to further support regulators to develop the specialist skills necessary to regulate emerging technologies, including options for increased flexibility on pay and conditions<sup id="fnref:29" role="doc-noteref"><a href="#fn:29" class="govuk-link" rel="footnote">[footnote 29]</a></sup>.</p> <p>iii. 
<strong>Regulator powers</strong>: We recognise the need to assess the existing powers and remits of the UK’s regulators to ensure they are equipped to address <abbr title="artificial intelligence">AI</abbr> risks and opportunities in their domains and implement the principles in a consistent and comprehensive way. We will, therefore, work with government departments and regulators to analyse and review potential gaps in existing regulatory powers and remits.</p> <p>iv. <strong>Coordination</strong>: In the coming months we will formalise our regulator coordination activities. To support and guide this work, we will establish a steering committee with government representatives and key regulators to support knowledge exchange and coordination on <abbr title="artificial intelligence">AI</abbr> governance by spring 2024. We continue to support regulatory coordination more widely, including working with bodies such as the Digital Regulation Cooperation Forum (<abbr title="Digital Regulation Cooperation Forum">DRCF</abbr>). Today we have published new guidance for regulators to support them to interpret and apply our principles.</p> <p>v. <strong>Research and innovation</strong>: We are working closely with UK Research and Innovation  (<abbr title="UK Research and Innovation">UKRI</abbr>) to ensure the government’s wider investments in <abbr title="artificial intelligence">AI</abbr> <abbr title="research and development">R&amp;D</abbr> can support the government’s safety agenda. This includes a new commitment by <abbr title="UK Research and Innovation">UKRI</abbr> to improve links between regulators and the skills, expertise, and activities supported by <abbr title="UK Research and Innovation">UKRI</abbr> investments in <abbr title="artificial intelligence">AI</abbr> research such as Responsible <abbr title="artificial intelligence">AI</abbr> UK, the Trustworthy Autonomous Systems hub, the <abbr title="UK Research and Innovation">UKRI</abbr> <abbr title="artificial intelligence">AI</abbr> Centres for Doctoral Training, and the Alan Turing Institute. This will ensure the UK’s strength in <abbr title="artificial intelligence">AI</abbr> research is fully utilised in our regulatory framework. This work builds on our previous commitment of £250 million through the <abbr title="UK Research and Innovation">UKRI</abbr> Technology Missions Fund to secure the UK’s global leadership in critical technologies<sup id="fnref:30" role="doc-noteref"><a href="#fn:30" class="govuk-link" rel="footnote">[footnote 30]</a></sup>. <abbr title="UK Research and Innovation">UKRI</abbr> is today announcing that £19 million of the Technology Missions Fund will support Phase 2 of the Accelerating Trustworthy <abbr title="artificial intelligence">AI</abbr> competition, supporting 21 projects delivered through the Innovate UK BridgeAI programme, to accelerate the adoption of trusted and responsible <abbr title="artificial intelligence">AI</abbr> and machine learning.</p> <p>vi. <strong>Ease of compliance</strong>: Regulation must work for innovators. We are supporting innovators and businesses to get new products to market safely and efficiently by funding a pilot multi-agency advisory service delivered by the <abbr title="Digital Regulation Cooperation Forum">DRCF</abbr><sup id="fnref:31" role="doc-noteref"><a href="#fn:31" class="govuk-link" rel="footnote">[footnote 31]</a></sup>. This will particularly help innovators navigate the legal and regulatory requirements they need to meet before launch. 
The online portal for the pilot <abbr title="Digital Regulation Cooperation Forum">DRCF</abbr> <abbr title="artificial intelligence">AI</abbr> and Digital Hub and the application window are due to launch in the spring. Insights from the pilot will inform the implementation of our regulatory approach. Further details on the eligibility criteria for the support to be offered by the pilot have been published by the <abbr title="Digital Regulation Cooperation Forum">DRCF</abbr> today alongside this consultation response.</p> <p>vii. <strong>Public trust</strong>: We want businesses, consumers, and the public to have confidence in <abbr title="artificial intelligence">AI</abbr> technologies. We will build trust by continuing to support work on assurance techniques and technical standards. The UK <abbr title="artificial intelligence">AI</abbr> Standards Hub, launched in 2022, provides practical tools and guides for businesses, organisations, and individuals to effectively use digital technical standards and participate in their development<sup id="fnref:32" role="doc-noteref"><a href="#fn:32" class="govuk-link" rel="footnote">[footnote 32]</a></sup>. In 2023, the government collaborated with techUK to launch the Portfolio of <abbr title="artificial intelligence">AI</abbr> Assurance Techniques announced in the <abbr title="artificial intelligence">AI</abbr> regulation white paper<sup id="fnref:33" role="doc-noteref"><a href="#fn:33" class="govuk-link" rel="footnote">[footnote 33]</a></sup>. In spring 2024, we will publish an “Introduction to <abbr title="artificial intelligence">AI</abbr> assurance” to further promote the value of <abbr title="artificial intelligence">AI</abbr> assurance and help businesses and organisations build their understanding of the techniques for safe and trustworthy systems. Alongside this, we undertake regular research with the public to ensure the government’s approach to <abbr title="artificial intelligence">AI</abbr> is aligned with our wider values<sup id="fnref:34" role="doc-noteref"><a href="#fn:34" class="govuk-link" rel="footnote">[footnote 34]</a></sup>.</p> <p>viii. <strong>Monitoring and Evaluation</strong>: We are developing a monitoring and evaluation plan that allows us to continuously assess the effectiveness of our regime as <abbr title="artificial intelligence">AI</abbr> technologies change. We will conduct a targeted consultation with a range of stakeholders on our proposed plan to assess the regulatory framework in spring 2024. As part of this, we will seek detailed views on our proposed metrics and data sources.</p> <p>20. <abbr title="artificial intelligence">AI</abbr> regulation will only work within a wider ecosystem that champions the industry. In 2023, the government committed over £1.5 billion to build public sector supercomputers, including the <abbr title="artificial intelligence">AI</abbr> Research Resource and an exascale computer. We are also working closely with the private sector to support investment, such as Microsoft’s announcement of £2.5 billion for <abbr title="artificial intelligence">AI</abbr>-related data centres in November 2023. The £80 million investment in <abbr title="artificial intelligence">AI</abbr> hubs that we are announcing today will enable <abbr title="artificial intelligence">AI</abbr> to evolve and tackle complex problems across applications from healthcare treatments to power-efficient electronics.
The government is also conducting a wider review of the UK <abbr title="artificial intelligence">AI</abbr> supply chain to ensure we maintain our strategic advantage as a world leader in these technologies.</p> <p>21. Finally, to drive coordinated action across government we have established lead <abbr title="artificial intelligence">AI</abbr> Ministers across all departments to bring together work on risks and opportunities driven by <abbr title="artificial intelligence">AI</abbr> in their sectors and to oversee implementation of frameworks and guidelines for public sector usage of <abbr title="artificial intelligence">AI</abbr>. We are also establishing a new Inter-Ministerial Group to drive effective coordination across government on <abbr title="artificial intelligence">AI</abbr> issues. Further to this, we are strengthening the team working on <abbr title="artificial intelligence">AI</abbr> within <abbr title="Department for Science, Innovation and Technology">DSIT</abbr>. In February 2023, we had a team of around 20 people working on <abbr title="artificial intelligence">AI</abbr> issues. This had grown to over 160 across the newly established <abbr title="artificial intelligence">AI</abbr> Policy Directorate and the <abbr title="artificial intelligence">AI</abbr> Safety Institute by the end of 2023, with plans to expand to more than 270 people in 2024. In recognition of the fact that <abbr title="artificial intelligence">AI</abbr> is a top priority for the Secretary of State and has become central to the wider work of the department and government, we will no longer maintain the branding of a separate Office for <abbr title="artificial intelligence">AI</abbr>. Similarly, the Centre for Data Ethics and Innovation (<abbr title="Centre for Data Ethics and Innovation">CDEI</abbr>) is changing its name to the Responsible Technology Adoption Unit to more accurately reflect its mission. The name highlights the directorate’s role in developing tools and techniques that enable responsible adoption of <abbr title="artificial intelligence">AI</abbr> in the private and public sectors, in support of the department’s central mission.</p> <h4 id="ai-governance-landscape"> <abbr title="artificial intelligence">AI</abbr> governance landscape</h4> <figure class="image embedded"><div class="img"><img src="https://assets.publishing.service.gov.uk/media/65c0b135c4319100141a4511/ai-governance-landscape.svg" alt=""></div> <figcaption><p>AI regulation landscape.</p></figcaption></figure> <p><abbr title="Department for Science, Innovation and Technology">DSIT</abbr> - the government department with overall responsibility for <abbr title="artificial intelligence">AI</abbr> policy, including regulation.</p> <p><strong>Image 1</strong>: A diagram of the <abbr title="artificial intelligence">AI</abbr> regulation landscape showing the relationships between the government, regulators, industry, and the wider ecosystem.</p> <h4 id="tackling-specific-risks">5.1.3 Tackling specific risks</h4> <p>22. There are three broad categories of <abbr title="artificial intelligence">AI</abbr> risk: societal harms; misuse risks; and autonomy risks<sup id="fnref:35" role="doc-noteref"><a href="#fn:35" class="govuk-link" rel="footnote">[footnote 35]</a></sup>. Below we outline examples of how the government and regulators are responding to specific risks in line with our principles.
This summary illustrates the wide range of work already happening to ensure the benefits of <abbr title="artificial intelligence">AI</abbr> innovation can be realised safely and responsibly. It is not intended to be exhaustive or prioritise certain risks over others.</p> <p>23. In addition to the work to address specific risks outlined below, we are today announcing £2 million of Arts and Humanities Research Council (<abbr title="Arts and Humanities Research Council">AHRC</abbr>) funding to support translational research that will help to define responsible <abbr title="artificial intelligence">AI</abbr> across sectors such as education, policing, and creative industries. These projects, part of the <abbr title="Arts and Humanities Research Council">AHRC</abbr>’s Bridging Responsible <abbr title="artificial intelligence">AI</abbr> Divides (<abbr title="Bridging Responsible AI Divides">BRAID</abbr>) work<sup id="fnref:36" role="doc-noteref"><a href="#fn:36" class="govuk-link" rel="footnote">[footnote 36]</a></sup>, will produce recommendations to inform future work in this area and demonstrate how the UK is at the forefront of embedding <abbr title="artificial intelligence">AI</abbr> across key sectors. In addition to the scoping projects, <abbr title="Arts and Humanities Research Council">AHRC</abbr> are confirming a further £7.6 million to fund a second phase of the <abbr title="Bridging Responsible AI Divides">BRAID</abbr> programme, extending activities to 2027/28. The next phase will include a new cohort of large-scale demonstrator projects, further rounds of <abbr title="Bridging Responsible AI Divides">BRAID</abbr> Fellowships, and new professional <abbr title="artificial intelligence">AI</abbr> skills provisions, co-developed with industry and other partners.</p> <h5 id="societal-harms">Societal harms</h5> <p><strong>Preparing UK workers for an <abbr title="artificial intelligence">AI</abbr> enabled economy</strong></p> <p>24. <abbr title="artificial intelligence">AI</abbr> is revolutionising the workplace. While the adoption of these technologies can bring new, higher quality jobs, it can also create and amplify a range of risks, such as workplace surveillance and discrimination in recruitment, that the government and regulators are already working to address. We want to harness the growth potential of <abbr title="artificial intelligence">AI</abbr> but this must not be at the expense of employment rights and protections for workers. The UK’s robust system of legislation and enforcement for employment protections, including specialist labour tribunals, sets a strong foundation for workers. To ensure the use of <abbr title="artificial intelligence">AI</abbr> in <abbr title="Human Resources">HR</abbr> and recruitment is safe, responsible, and fair, the Department for Science, Innovation and Technology (<abbr title="Department for Science, Innovation and Technology">DSIT</abbr>)  will provide updated guidance in spring 2024.</p> <p>25. Since 2018 we have funded a £290 million package of <abbr title="artificial intelligence">AI</abbr> skills and talent initiatives to make sure that <abbr title="artificial intelligence">AI</abbr> education and awareness is accessible across the UK. This includes funding 24 <abbr title="artificial intelligence">AI</abbr> Centres for Doctoral Training which will train over 1,500 <abbr title="Doctorate of Philosophy">PhD</abbr> students. 
We are also working with Innovate UK and the Alan Turing Institute to develop guidance that sets out the core <abbr title="artificial intelligence">AI</abbr> skills people need, from ‘<abbr title="artificial intelligence">AI</abbr> citizens’ to ‘<abbr title="artificial intelligence">AI</abbr> professionals’. We published draft guidance for public comment in November 2023 and we intend to publish a final version and a full skills framework in spring 2024<sup id="fnref:37" role="doc-noteref"><a href="#fn:37" class="govuk-link" rel="footnote">[footnote 37]</a></sup>.</p> <p>26. It is hard to predict, at this stage, exactly how the labour market will change due to <abbr title="artificial intelligence">AI</abbr>. Some sectors are concerned that <abbr title="artificial intelligence">AI</abbr> will displace jobs through automation<sup id="fnref:38" role="doc-noteref"><a href="#fn:38" class="govuk-link" rel="footnote">[footnote 38]</a></sup>. The Department for Education (<abbr title="Department for Education">DfE</abbr>) has published initial work on the impact of <abbr title="artificial intelligence">AI</abbr> on UK jobs, sectors, qualifications, and training pathways<sup id="fnref:39" role="doc-noteref"><a href="#fn:39" class="govuk-link" rel="footnote">[footnote 39]</a></sup>. We can be confident that we will need new <abbr title="artificial intelligence">AI</abbr>-related skills through national qualifications and training provision. The government has invested £3.8 billion in higher and further education in this parliament to make the skills system employer-led and responsive to future needs. Along with <abbr title="Department for Education">DfE</abbr>’s Apprenticeships<sup id="fnref:40" role="doc-noteref"><a href="#fn:40" class="govuk-link" rel="footnote">[footnote 40]</a></sup> and Skills Bootcamps<sup id="fnref:41" role="doc-noteref"><a href="#fn:41" class="govuk-link" rel="footnote">[footnote 41]</a></sup>, the new Lifelong Learning Entitlement reforms<sup id="fnref:42" role="doc-noteref"><a href="#fn:42" class="govuk-link" rel="footnote">[footnote 42]</a></sup> and Advanced British Standard<sup id="fnref:43" role="doc-noteref"><a href="#fn:43" class="govuk-link" rel="footnote">[footnote 43]</a></sup> will put academic and technical education in England on an equal footing and ensure our skills and education system is fit for the future.</p> <p><strong>Enabling <abbr title="artificial intelligence">AI</abbr> innovation and protecting intellectual property</strong></p> <p>27. The <abbr title="artificial intelligence">AI</abbr> technology and creative sectors, as well as our media, are strongest when they work together in partnership. This government is committed to supporting these sectors so that they continue to flourish and are able to compete internationally. The Department for Culture, Media and Sport (<abbr title="Department for Digital, Culture, Media and Sport">DCMS</abbr>) is working closely with publishers, the music industry, and other creative businesses to understand the impact of <abbr title="artificial intelligence">AI</abbr> on these sectors, with a view to mitigating risks and capitalising on opportunities. Significant funding highlighted in the Creative Industries Sector Vision<sup id="fnref:44" role="doc-noteref"><a href="#fn:44" class="govuk-link" rel="footnote">[footnote 44]</a></sup> will help enable <abbr title="artificial intelligence">AI</abbr>-based <abbr title="research and development">R&amp;D</abbr> and innovation in the creative industries.</p> <p>28. 
Creative industries and media organisations have particular concerns regarding copyright protections in the era of generative <abbr title="artificial intelligence">AI</abbr>. Creative industries and rights holders are concerned about the large-scale use of copyright-protected content for training <abbr title="artificial intelligence">AI</abbr> models and have called for assurance that their ability to retain autonomy and control over their valuable work will be protected. At the same time, <abbr title="artificial intelligence">AI</abbr> developers have emphasised that they need to be able to easily access a wide range of high-quality datasets to develop and train cutting-edge <abbr title="artificial intelligence">AI</abbr> systems in the UK.</p> <p>29. The Intellectual Property Office (<abbr title="Intellectual Property Office">IPO</abbr>) convened a working group made up of rights holders and <abbr title="artificial intelligence">AI</abbr> developers on the interaction between copyright and <abbr title="artificial intelligence">AI</abbr>. The working group has provided a valuable forum for stakeholders to share their views. Unfortunately, it is now clear that the working group will not be able to agree an effective voluntary code.</p> <p>30. <abbr title="Department for Science, Innovation and Technology">DSIT</abbr> and <abbr title="Department for Digital, Culture, Media and Sport">DCMS</abbr> ministers will now lead a period of engagement with the <abbr title="artificial intelligence">AI</abbr> and rights holder sectors, seeking to ensure the workability and effectiveness of an approach that allows the <abbr title="artificial intelligence">AI</abbr> and creative sectors to grow together in partnership. The government is committed to the growth of our world-leading creative industries and we recognise the importance of ensuring <abbr title="artificial intelligence">AI</abbr> development supports, rather than undermines, human creativity, innovation, and the provision of trustworthy information.</p> <p>31. Our approach will need to be underpinned by trust and transparency between parties, with greater transparency from <abbr title="artificial intelligence">AI</abbr> developers in relation to data inputs and the attribution of outputs having an important role to play. Our work will therefore also include exploring mechanisms for providing greater transparency so that rights holders can better understand whether content they produce is used as an input into <abbr title="artificial intelligence">AI</abbr> models. The government wants to work closely with rights holders and <abbr title="artificial intelligence">AI</abbr> developers to deliver this. Critical to all of this work will also be close engagement with international counterparts who are also working to address these issues. We will soon set out further proposals on the way forward.</p> <p><strong>Protecting UK citizens from <abbr title="artificial intelligence">AI</abbr>-related bias and discrimination</strong></p> <p>32. <abbr title="artificial intelligence">AI</abbr> has the potential to entrench bias and discrimination<sup id="fnref:45" role="doc-noteref"><a href="#fn:45" class="govuk-link" rel="footnote">[footnote 45]</a></sup>, possibly leading to unfairly negative outcomes for different populations across a range of sectors.
For example, unaccounted-for bias in an <abbr title="artificial intelligence">AI</abbr>-enabled automated decision-making process could result in discriminatory outcomes against specific demographic characteristics in areas such as credit applications<sup id="fnref:46" role="doc-noteref"><a href="#fn:46" class="govuk-link" rel="footnote">[footnote 46]</a></sup> or recruitment<sup id="fnref:47" role="doc-noteref"><a href="#fn:47" class="govuk-link" rel="footnote">[footnote 47]</a></sup>. In line with our fairness principle, the department is working closely with the Equality and Human Rights Commission (<abbr title="Equality and Human Rights Commission">EHRC</abbr>) and <abbr title="Information Commissioner's Office">ICO</abbr> to develop new solutions to address bias and discrimination in <abbr title="artificial intelligence">AI</abbr> systems<sup id="fnref:48" role="doc-noteref"><a href="#fn:48" class="govuk-link" rel="footnote">[footnote 48]</a></sup>.</p> <p>33. Both regulators and public sector bodies are acting to address <abbr title="artificial intelligence">AI</abbr>-related bias and discrimination in their domains. The <abbr title="Information Commissioner's Office">ICO</abbr> has updated guidance on how our strong data protection laws apply to <abbr title="artificial intelligence">AI</abbr> systems that process personal data to include fairness and has continued to hold organisations to account, for example through the issuing of enforcement notices<sup id="fnref:49" role="doc-noteref"><a href="#fn:49" class="govuk-link" rel="footnote">[footnote 49]</a></sup>. The Office of the Police Chief Scientific Adviser published a Covenant for Using <abbr title="artificial intelligence">AI</abbr> in Policing<sup id="fnref:50" role="doc-noteref"><a href="#fn:50" class="govuk-link" rel="footnote">[footnote 50]</a></sup> which has been endorsed by the National Police Chiefs’ Council and should be given due regard by all developers and users of the technology in the sector.</p> <p><strong>Reforming data protection law to support innovation and privacy</strong></p> <p>34. Data is the foundation for modelling, training, and developing <abbr title="artificial intelligence">AI</abbr> systems. But it is critical that relevant individual rights are respected and data protection principles complied with when processing personal data in <abbr title="artificial intelligence">AI</abbr> systems. The <abbr title="Information Commissioner's Office">ICO</abbr> has demonstrated how they can use data protection law to hold organisations to account through regulatory action and public communications where <abbr title="artificial intelligence">AI</abbr> systems are processing personal data. The UK’s data protection framework, which is being reformed through the Data Protection and Digital Information Bill (<abbr title="Data Protection and Digital Information Bill">DPDI</abbr>), will complement our pro-innovation, proportionate, and context-based approach to regulating <abbr title="artificial intelligence">AI</abbr>.</p> <p>35. Current rules on automated decision-making are confusing and complex, undermining confidence to develop and use innovative technologies. The <abbr title="Data Protection and Digital Information Bill">DPDI</abbr> Bill will expand the lawful bases on which solely automated decisions that have significant effects on individuals can take place and provide a boost in confidence to organisations looking to use the technologies responsibly.
It will continue to ensure that data subject rights are protected with safeguards in place. For example, data subjects will be provided with information on such decisions, have the opportunity to make representations, and can request human intervention or contest the decision. This will support innovation and reduce burdens on people and businesses, while maintaining data protection safeguards in line with the UK’s high standards of data protection.</p> <p><strong>Ensuring <abbr title="artificial intelligence">AI</abbr>-generated online content is trusted and safe</strong></p> <p>36. The government is committed to ensuring that people have access to accurate information and is supporting all efforts to promote verifiable sources to tackle the spread of false or misleading information. <abbr title="artificial intelligence">AI</abbr> technologies are increasingly able to provide individuals with cheap ways to generate realistic content that can falsely portray people and events. Similarly, <abbr title="artificial intelligence">AI</abbr> may increase volumes of unintentionally false, biased, or harmful content<sup id="fnref:51" role="doc-noteref"><a href="#fn:51" class="govuk-link" rel="footnote">[footnote 51]</a></sup>. This may drive negative public perceptions of information quality and lower overall trust in information sources<sup id="fnref:52" role="doc-noteref"><a href="#fn:52" class="govuk-link" rel="footnote">[footnote 52]</a></sup>.</p> <p>37. We have published emerging practices to protect trust in online information, including watermarking and output databases<sup id="fnref:53" role="doc-noteref"><a href="#fn:53" class="govuk-link" rel="footnote">[footnote 53]</a></sup>. We will shortly launch a call for evidence on <abbr title="artificial intelligence">AI</abbr>-related risks to trust in information to develop our understanding of this fast-moving and nascent area of technological development, including possible mitigations. This will be aimed at researchers, academics, and civil society organisations with relevant expertise. We will also explore research into the wider and systemic impacts on the information ecosystem and potential solutions. We also continue to engage with news publishers and broadcasters, as vital channels for trustworthy and verifiable information, on the risks of <abbr title="artificial intelligence">AI</abbr> to journalism.</p> <p><strong>Ensuring <abbr title="artificial intelligence">AI</abbr>-driven digital markets are competitive</strong></p> <p>38. <abbr title="artificial intelligence">AI</abbr> is creating huge opportunities for innovation that benefits businesses and consumers across the economy. The markets for both the underlying <abbr title="artificial intelligence">AI</abbr> technologies, such as foundation models, and products that use <abbr title="artificial intelligence">AI</abbr> in new and innovative ways, are growing quickly.</p> <p>39. Where these markets are competitive they will drive innovation and better outcomes for businesses and consumers. Successful firms will rightly grow and increase their market share, but it will be important that market power does not become entrenched by only a small number of firms.</p> <p>40. The <abbr title="Competition and Markets Authority">CMA</abbr> will take steps to ensure that <abbr title="artificial intelligence">AI</abbr> markets work well for all.
In September 2023, the regulator published an initial review into the market for foundation models<sup id="fnref:54" role="doc-noteref"><a href="#fn:54" class="govuk-link" rel="footnote">[footnote 54]</a></sup>. The report found that, while there will be many benefits to consumers from <abbr title="artificial intelligence">AI</abbr>, these technologies could enable firms to gain or entrench market power. The Digital Markets, Competition and Consumers Bill, which is currently progressing through Parliament, will give the <abbr title="Competition and Markets Authority">CMA</abbr> additional tools to identify and address any competition issues in <abbr title="artificial intelligence">AI</abbr> markets and other digital markets affected by recent developments in <abbr title="artificial intelligence">AI</abbr>.</p> <p><strong>Ensuring <abbr title="artificial intelligence">AI</abbr> best practice in the public sector</strong></p> <p>41. <abbr title="artificial intelligence">AI</abbr> offers enormous opportunities for transforming productivity in the public sector. The UK is already leading the way, ranked third in the Government <abbr title="artificial intelligence">AI</abbr> Readiness Index.<sup id="fnref:55" role="doc-noteref"><a href="#fn:55" class="govuk-link" rel="footnote">[footnote 55]</a></sup> In November 2023, we announced that we are tripling the number of technical <abbr title="artificial intelligence">AI</abbr> engineers and developers within the Cabinet Office to create a new <abbr title="artificial intelligence">AI</abbr> Incubator for the government. These experts will design and implement <abbr title="artificial intelligence">AI</abbr> solutions across government departments to drive improvements in public service delivery. This potential productivity improvement could, for example, save police up to 38 million hours per year – the equivalent of around 750,000 hours every week<sup id="fnref:56" role="doc-noteref"><a href="#fn:56" class="govuk-link" rel="footnote">[footnote 56]</a></sup>.</p> <p>42. We are seizing the opportunities presented by <abbr title="artificial intelligence">AI</abbr> to deliver better public services including health, education, and transport. For example, last year the Department of Health and Social Care (<abbr title="Department of Health and Social Care">DHSC</abbr>) and <abbr title="National Health Service">NHS</abbr> launched the £21 million <abbr title="artificial intelligence">AI</abbr> Diagnostic Fund to deploy these technologies in key, high-demand areas such as chest X-rays and CT scans<sup id="fnref:57" role="doc-noteref"><a href="#fn:57" class="govuk-link" rel="footnote">[footnote 57]</a></sup>. <abbr title="Department for Education">DfE</abbr> has been examining how to maximise the benefits of <abbr title="artificial intelligence">AI</abbr> in the education sector, including publishing a policy paper and a call for evidence on generative <abbr title="artificial intelligence">AI</abbr> in education<sup id="fnref:58" role="doc-noteref"><a href="#fn:58" class="govuk-link" rel="footnote">[footnote 58]</a></sup>, as well as running a hackathons project to further understand possible use cases. The findings of the hackathons will be published in spring of this year. The Department for Transport (DfT) is focused on the new Automated Vehicles Bill, designed to put the UK at the forefront of regulation of self-driving technology and in a strong position to realise an estimated £42 billion share of the global self-driving market.
DfT also plans to publish its first Transport <abbr title="artificial intelligence">AI</abbr> Strategy in 2024, to help both the department and the wider sector grasp the opportunities and risks presented by new <abbr title="artificial intelligence">AI</abbr> capabilities. Alongside this, the department continues to fund innovative small and medium-sized enterprises (<abbr title="small and medium-sized enterprises">SMEs</abbr>) through its Transport Research and Innovation Grants scheme to support the next generation of <abbr title="artificial intelligence">AI</abbr> tools and applications, as well as trialling <abbr title="artificial intelligence">AI</abbr> to support fraud identification in its grant-making processes.</p> <p>43. Cabinet Office (<abbr title="Cabinet Office">CO</abbr>) is leading on establishing the necessary underpinnings to drive <abbr title="artificial intelligence">AI</abbr> adoption across the public sector, by improving digital infrastructure and access to data sets, and developing centralised standards. The government is also using the procurement power of the public sector to drive responsible and safe <abbr title="artificial intelligence">AI</abbr> innovation. The Central Digital and Data Office (<abbr title="Central Digital and Data Office">CDDO</abbr>) has published guidance on the procurement and use of generative <abbr title="artificial intelligence">AI</abbr> for the UK government<sup id="fnref:59" role="doc-noteref"><a href="#fn:59" class="govuk-link" rel="footnote">[footnote 59]</a></sup>. Later this year, <abbr title="Department for Science, Innovation and Technology">DSIT</abbr> will launch the <abbr title="artificial intelligence">AI</abbr> Management Essentials scheme, setting a minimum good practice standard for companies selling <abbr title="artificial intelligence">AI</abbr> products and services. We will consult on introducing this as a mandatory requirement for public sector procurement, using purchasing power to drive responsible innovation in the broader economy.</p> <p>44. This builds on the Algorithmic Transparency Recording Standard (<abbr title="Algorithmic Transparency Recording Standard">ATRS</abbr>), which established a standardised way for public sector organisations to proactively publish information about how and why they are using algorithmic methods in decision-making. Following a successful pilot of the standard, and publication of an approved cross-government version last year, we will now make the use of the <abbr title="Algorithmic Transparency Recording Standard">ATRS</abbr> a requirement for all government departments and plan to expand this across the broader public sector over time.</p> <p>45. To inform the secure use of <abbr title="artificial intelligence">AI</abbr> across government, the public sector, and beyond, the National Cyber Security Centre (<abbr title="National Cyber Security Centre">NCSC</abbr>) has published a range of guidance products on the cyber security considerations around using and developing <abbr title="artificial intelligence">AI</abbr><sup id="fnref:60" role="doc-noteref"><a href="#fn:60" class="govuk-link" rel="footnote">[footnote 60]</a></sup>.</p> <h5 id="misuse-risks">Misuse risks</h5> <p><strong>Safeguarding democracy from electoral interference</strong></p> <p>46. The government is committed to strengthening the integrity of elections to ensure that our democracy remains secure, modern, transparent, and fair.
<abbr title="artificial intelligence">AI</abbr> has the potential to increase the reach of actors spreading disinformation online, target new audiences more effectively, and generate new types of content that are more difficult to detect<sup id="fnref:61" role="doc-noteref"><a href="#fn:61" class="govuk-link" rel="footnote">[footnote 61]</a></sup>. Our Defending Democracy Taskforce is helping to reduce the threat of foreign interference in our democracy by bringing together a wide range of expertise across government, the intelligence community, and industry. In 2024, the Taskforce will be increasing its engagement with partners, collaborating with devolved governments, the police, local authorities, tech companies, and international partners.</p> <p>47. We will always respond firmly to any threats to the UK’s democracy. The Elections Act 2022 introduced the new digital imprints regime, which will increase the transparency of digital political advertising (including <abbr title="artificial intelligence">AI</abbr>-generated material), by requiring those promoting eligible digital campaigning material targeted at the UK electorate to include an imprint with their name and address. This will empower voters to know who is promoting political material online and on whose behalf. The Elections Act 2022 also revised the offence of undue influence. This will better protect voters from improper influences to vote in a particular way, or to not vote at all, and includes activities that deceive a person in relation to the administration of an election (such as the date of an electoral event or the location of a polling station).</p> <p>48. The Online Safety Act 2023 will capture specific activity aimed at disrupting elections where it is a criminal offence in scope of the regulatory framework. This includes content that contains incitement to violence against electoral candidates and public figures, and the offence of undue influence. The foreign interference offence from the National Security Act 2023 has been added to the Online Safety Act as a “priority offence”, putting new responsibilities on online service providers and capturing attempts by foreign state actors to manipulate our information environment and undermine our democratic, political, and legal processes (including elections). The Online Safety Act has also updated <abbr title="Office of Communications">Ofcom</abbr>’s statutory media literacy duty, requiring the regulator to heighten the public’s awareness of, and resilience to, misinformation and disinformation online.</p> <p>49. We will consider the tools available to verify election-related content. This could include using watermarks to give people confidence in the content they are viewing. It is not just the government that needs to act. We will continue to work with tech companies to ensure that it is possible to report and remove fakes quickly. Building on discussions at the <abbr title="artificial intelligence">AI</abbr> Safety Summit, we are collaborating with international and industry partners to address the shared risk of election interference.</p> <p><strong>Preventing the misuse of <abbr title="artificial intelligence">AI</abbr> technologies</strong></p> <p>50. <abbr title="artificial intelligence">AI</abbr> capabilities may be used maliciously, for example, to perform cyberattacks or design weapons<sup id="fnref:62" role="doc-noteref"><a href="#fn:62" class="govuk-link" rel="footnote">[footnote 62]</a></sup>. 
Developments in <abbr title="artificial intelligence">AI</abbr> can amplify existing risks by enabling less sophisticated threat actors to carry out more substantial attacks at a larger scale<sup id="fnref:63" role="doc-noteref"><a href="#fn:63" class="govuk-link" rel="footnote">[footnote 63]</a></sup>. We are working with industry, academia, and international partners to find proportionate, practical mitigations to these risks. The refreshed 2023 Biological Security Strategy will ensure that by 2030 the UK is resilient to a spectrum of biological risks and a world leader in responsible innovation<sup id="fnref:64" role="doc-noteref"><a href="#fn:64" class="govuk-link" rel="footnote">[footnote 64]</a></sup>. As set out in the National Vision for Engineering Biology, the government has identified screening of synthetic DNA as a responsible innovation policy priority for 2024<sup id="fnref:65" role="doc-noteref"><a href="#fn:65" class="govuk-link" rel="footnote">[footnote 65]</a></sup>. Prioritising this will allow us to continue reaping the economic rewards of engineering biology in the UK whilst improving the safety of the supply chain.</p> <p>51. Some of the risks presented by <abbr title="artificial intelligence">AI</abbr> systems are manifesting today as these technologies are misused to increase the scale, speed, and success of criminal offences. As discussed above, <abbr title="artificial intelligence">AI</abbr> can provide users with increasing capability to produce false or misleading content. This can include material that constitutes a criminal offence, such as fraud, online child sexual abuse, and intimate image abuse. The government has already moved to address some of these issues in the Online Safety Act 2023. Some <abbr title="artificial intelligence">AI</abbr> technologies could be misused to commit identity-related fraud, such as producing false documentation used for immigration purposes. These capabilities present potential risks related to fraudulent access to public funds.</p> <p>52. To address the potential criminal use of <abbr title="artificial intelligence">AI</abbr>, we are reviewing the extent to which existing criminal law provides coverage of <abbr title="artificial intelligence">AI</abbr>-enabled offending and harmful behaviour. <abbr title="artificial intelligence">AI</abbr> may also present systemic risks to police capacity, institutional trust, and the evidential process. The government will make amendments to existing legal frameworks as required to protect law and order. <abbr title="artificial intelligence">AI</abbr> also offers opportunities for law enforcement to become more efficient at detecting and preventing crime. As such, these technologies may help mitigate some of the risks of <abbr title="artificial intelligence">AI</abbr>-enabled criminal offences. For example, we are investing in <abbr title="artificial intelligence">AI</abbr> models that allow police to detect and categorise the severity of child abuse images more effectively. We are also exploring how <abbr title="artificial intelligence">AI</abbr> might enable officers to redact large amounts of text evidence more quickly.</p> <p>53. To help organisations develop and use <abbr title="artificial intelligence">AI</abbr> securely, the <abbr title="National Cyber Security Centre">NCSC</abbr> published guidelines for secure <abbr title="artificial intelligence">AI</abbr> system development in November 2023.
The government is now looking to build on this and other important publications by releasing a call for views in spring 2024 to obtain further input on our next steps in securing <abbr title="artificial intelligence">AI</abbr> models, including a potential Code of Practice for cyber security of <abbr title="artificial intelligence">AI</abbr>, based on <abbr title="National Cyber Security Centre">NCSC</abbr>’s guidelines. International collaboration in this area is vital if we are to see meaningful change to the security of <abbr title="artificial intelligence">AI</abbr> models, and we will be exploring ways to promote international alignment, such as via international standards.</p> <p>54. This builds on our work to secure personal devices and critical infrastructure. The security regime in the Product Security and Telecommunications Infrastructure (“<abbr title="Product Security and Telecommunications Infrastructure">PSTI</abbr>”) Act, scheduled to come into effect in 2024, will require manufacturers of consumer connectable products, such as <abbr title="artificial intelligence">AI</abbr>-enabled smart speakers, to comply with minimum security requirements underpinned by the secure by design principle. This means no consumer connectable products in scope of the regime can be made available to UK customers unless the manufacturer has minimum security measures in place covering the product’s hardware and software, and, where appropriate, associated <abbr title="artificial intelligence">AI</abbr> solutions. Beyond this, the National Protective Security Authority (<abbr title="National Protective Security Authority">NPSA</abbr>) conducts research to understand how <abbr title="artificial intelligence">AI</abbr> can, and will, enhance physical and personnel security. <abbr title="National Protective Security Authority">NPSA</abbr> advises a wide range of organisations, including critical national infrastructure companies, on how to address <abbr title="artificial intelligence">AI</abbr>-related threats and delivers campaigns to help protect valuable <abbr title="artificial intelligence">AI</abbr>-related intellectual property for emerging technology companies.</p> <h5 id="autonomy-risks">Autonomy risks</h5> <p>55. In our discussion paper on frontier <abbr title="artificial intelligence">AI</abbr> capabilities and risks<sup id="fnref:66" role="doc-noteref"><a href="#fn:66" class="govuk-link" rel="footnote">[footnote 66]</a></sup>, we outlined potential future risks linked to the increasing autonomy of advanced <abbr title="artificial intelligence">AI</abbr> systems. Some experts are concerned that, as <abbr title="artificial intelligence">AI</abbr> systems become more capable across a wider range of tasks, humans will increasingly rely on <abbr title="artificial intelligence">AI</abbr> to make important decisions. Some also believe that, in the future, agentic <abbr title="artificial intelligence">AI</abbr> systems may have the capabilities to actively reduce human control and increase their own influence. New research on the advancing capabilities of agentic <abbr title="artificial intelligence">AI</abbr> demonstrates that we may need to consider potential new measures to address emerging risks as the foundational <abbr title="artificial intelligence">AI</abbr> technologies that underpin a range of applications continue to develop<sup id="fnref:67" role="doc-noteref"><a href="#fn:67" class="govuk-link" rel="footnote">[footnote 67]</a></sup>.</p> <p>56. 
In section 5.2, we set out proposals for new future responsibilities on developers of highly capable general-purpose <abbr title="artificial intelligence">AI</abbr>. While the likelihood of autonomy risks is debated, we believe that our proposals introduce accountability, governance, and oversight for these developers, as well as testing and benchmarking of powerful <abbr title="artificial intelligence">AI</abbr> systems, to address these risks now and in the future. In particular, the testing conducted by the <abbr title="artificial intelligence">AI</abbr> Safety Institute will identify systems with potentially hazardous capabilities (see sections 5.2 and 5.3 for more details on the role of the Institute). Testing has already begun and will increase in pace over the coming months. These initial steps build the UK’s technical capability to assess and respond to emerging <abbr title="artificial intelligence">AI</abbr> risks, ensuring our resilience to future technological developments.</p> <h3 id="examining-the-case-for-new-responsibilities-for-developers-of-highly-capable-general-purpose-ai-systems"> <span class="number">5.2. </span> Examining the case for new responsibilities for developers of highly capable general-purpose <abbr title="artificial intelligence">AI</abbr> systems</h3> <p>57. As noted above, we are seeing rapid progress in the performance of highly capable general-purpose <abbr title="artificial intelligence">AI</abbr> systems. We expect this to continue as organisations develop them with more compute, more data, and more efficient algorithms. Developers do not always know which capabilities a model may exhibit before testing<sup id="fnref:68" role="doc-noteref"><a href="#fn:68" class="govuk-link" rel="footnote">[footnote 68]</a></sup>. Some companies have publicly stated their goal to build <abbr title="artificial intelligence">AI</abbr> systems that are more capable than humans at a range of tasks<sup id="fnref:69" role="doc-noteref"><a href="#fn:69" class="govuk-link" rel="footnote">[footnote 69]</a></sup>. With agentic <abbr title="artificial intelligence">AI</abbr> capabilities on the horizon, we expect further transformative changes to our societies<sup id="fnref:70" role="doc-noteref"><a href="#fn:70" class="govuk-link" rel="footnote">[footnote 70]</a></sup>.</p> <p>58. The Prime Minister set out the government’s approach to managing risk at the frontier of <abbr title="artificial intelligence">AI</abbr> development in October 2023. He stated: “My vision, and our ultimate goal, should be to work towards a more international approach to safety, where we collaborate with partners to ensure <abbr title="artificial intelligence">AI</abbr> systems are safe before they are released<sup id="fnref:71" role="doc-noteref"><a href="#fn:71" class="govuk-link" rel="footnote">[footnote 71]</a></sup>.”</p> <p>59. We set out below how the UK has led the way with a technical approach, securing voluntary agreements on <abbr title="artificial intelligence">AI</abbr> safety with key countries and companies. The new <abbr title="artificial intelligence">AI</abbr> Safety Institute will work with its partners to test the most powerful new <abbr title="artificial intelligence">AI</abbr> systems pre- and post-deployment. As the Prime Minister set out, we will not “rush to regulate” and risk implementing the wrong measures, which may fail to balance addressing risks with supporting innovation.</p> <p>60.
Clearly, if the exponential growth of <abbr title="artificial intelligence">AI</abbr> capabilities continues, and if – as we think could be the case – voluntary measures are deemed incommensurate with the risk, countries will want some binding measures to keep the public safe. Some countries, such as the United States, are beginning to explore this through mandatory reporting requirements for the most powerful systems. We have seen significant interventions from leading figures in industry, science, and civil society, highlighting how governments should consider responding to these developments<sup id="fnref:72" role="doc-noteref"><a href="#fn:72" class="govuk-link" rel="footnote">[footnote 72]</a></sup>, and we welcome continued close collaboration with these expert voices.</p> <p>61. The UK will continue to lead the conversation on effective <abbr title="artificial intelligence">AI</abbr> governance. In the section below, we set out some of the key questions that countries will have to grapple with when deciding how best to manage the risks of highly capable general-purpose <abbr title="artificial intelligence">AI</abbr> systems, such as how to allocate liability across the supply chain and how to approach the open release of the most powerful systems. We will continue to discuss these questions with civil society, industry, and international partners to prepare for the future.</p> <div class="call-to-action"> <h4 id="box-2-what-do-we-mean-by-highly-capable-general-purpose-ai-systems">Box 2: What do we mean by highly capable general-purpose <abbr title="artificial intelligence">AI</abbr> systems?</h4> <p>In the <abbr title="artificial intelligence">AI</abbr> regulation white paper, we defined “foundation models” as “a type of <abbr title="artificial intelligence">AI</abbr> model that is trained on a vast quantity of data and is adaptable for use on a wide range of tasks. Foundation models can be used as a base for building more specific <abbr title="artificial intelligence">AI</abbr> models<sup id="fnref:73" role="doc-noteref"><a href="#fn:73" class="govuk-link" rel="footnote">[footnote 73]</a></sup>.”</p> <p>For the purposes of the <abbr title="artificial intelligence">AI</abbr> Safety Summit, the UK defined “frontier <abbr title="artificial intelligence">AI</abbr>” as highly capable general-purpose <abbr title="artificial intelligence">AI</abbr> models that can perform a wide variety of tasks and match or exceed the capabilities present in today’s most advanced models.</p> <p>Today, this can include the cutting-edge foundation models that underpin consumer-facing applications. However, it is important to note that, both today and in the future, highly capable <abbr title="artificial intelligence">AI</abbr> systems could be underpinned by another technology.</p> <p>In this consultation response, we focus our discussion on future responsibilities for the developers of highly capable general-purpose <abbr title="artificial intelligence">AI</abbr> systems. Developers of these systems currently face the least clear legal responsibilities. The systems have the least coverage under existing regulation while presenting some of the greatest potential risk. This means some of those risks may not be addressed effectively.
In the future, our regulatory approach might also need to allocate new responsibilities to developers of highly capable narrow systems as the framework continues to adapt to reflect new technological developments, different risks, or further analysis of accountability across the <abbr title="artificial intelligence">AI</abbr> life cycle.</p> </div> <h4 id="the-regulatory-challenges-of-highly-capable-general-purpose-ai">5.2.1. The regulatory challenges of highly capable general-purpose <abbr title="artificial intelligence">AI</abbr> </h4> <p>62. The <abbr title="artificial intelligence">AI</abbr> regulation white paper outlined a regulatory approach designed to adapt and keep pace with the rapid developments in <abbr title="artificial intelligence">AI</abbr> technology. For the large majority of <abbr title="artificial intelligence">AI</abbr> systems, our view is still that it is more effective to focus on how <abbr title="artificial intelligence">AI</abbr> is used within a specific context than to regulate specific technologies. This is because the level of risk will be determined by where and how <abbr title="artificial intelligence">AI</abbr> is used.</p> <p>63. However, some highly capable <abbr title="artificial intelligence">AI</abbr> systems can present substantial risks. Risk may increase when a highly capable system is general-purpose and can be used in a wide range of applications across different sectors. If a general-purpose <abbr title="artificial intelligence">AI</abbr> system presents a risk of harm, multiple sectors or applications could be affected: a single feature or flaw in one model might result in multiple harms across the whole economy. For example, if an <abbr title="artificial intelligence">AI</abbr> system is used to underpin complex automated processes in both healthcare and recruitment, but the model’s outputs demonstrate bias in a way that is not sufficiently transparent or with impacts that are not adequately mitigated, this could result in discriminatory practices in these different services.</p> <p>64. Highly capable general-purpose <abbr title="artificial intelligence">AI</abbr> systems challenge a context-based approach to regulation as some of the risks that they contribute to may not be effectively mitigated by existing regulation. For example, the cross-sectoral impact of these systems may prevent harms from being sufficiently addressed. Even though some regulators can enforce existing laws against the developers of the most capable general-purpose systems within their current remits<sup id="fnref:74" role="doc-noteref"><a href="#fn:74" class="govuk-link" rel="footnote">[footnote 74]</a></sup>, the wide range of potential uses means that general-purpose systems do not currently fit neatly within the remit of any one regulator, potentially leaving risks without effective mitigations<sup id="fnref:75" role="doc-noteref"><a href="#fn:75" class="govuk-link" rel="footnote">[footnote 75]</a></sup>.</p> <p>65. While some regulators demonstrate advanced approaches to addressing <abbr title="artificial intelligence">AI</abbr> within their remits, many of our current legal frameworks and regulator remits may not effectively mitigate the risks posed by highly capable general-purpose <abbr title="artificial intelligence">AI</abbr> systems.
Many regulators in the UK can struggle to enforce existing rules on those actors designing, training, and developing the most powerful general-purpose <abbr title="artificial intelligence">AI</abbr> systems. Similarly, it is not always clear how existing rules can be applied to effectively address the risks that highly capable general-purpose models can present. Existing rules and laws are frequently applied to the deployment or application level of <abbr title="artificial intelligence">AI</abbr>, but the organisations deploying or using these systems may not be well placed to identify, assess, or mitigate the risks they can present. If this is the case, new responsibilities on the developers of highly capable general-purpose models may more effectively address risks.</p> <p>66. Our ongoing work analysing life cycle accountability for <abbr title="artificial intelligence">AI</abbr>, outlined in the white paper, may eventually need to consider the role of other actors across the value chain, such as data or cloud hosting providers, to determine how legal responsibility for <abbr title="artificial intelligence">AI</abbr> may be distributed most fairly and effectively. This analysis will also consider how the unpredictable emergence of future capabilities and risks could expose further gaps in the regulatory landscape.</p> <div class="call-to-action"> <p><strong>Case study 1: Liability as a barrier to <abbr title="artificial intelligence">AI</abbr> adoption in the UK</strong></p> <p>“Count Your Pennies Ltd”, a fictional accountancy firm, purchases an “off the shelf” <abbr title="artificial intelligence">AI</abbr> recruitment tool developed by a fictional UK company called “Quantum Talent Technologies”. The tool automatically shortlists candidates based on their application forms.</p> <p>One fictional candidate, Ms Smith, queries why her application was rejected for a certain position given her clear suitability for the role. After receiving an unsatisfactory response from the recruiting manager, she files a discrimination claim. Through the investigation, it becomes clear that the <abbr title="artificial intelligence">AI</abbr> tool is discriminatory. It was built using a powerful foundation model that was developed by a non-UK company and trained on biased historic employment data.</p> <p>It is common for the law to allocate liability to the last actor in the chain (in this case, “Count Your Pennies Ltd”). In limited circumstances, the law may also allocate liability to the actor immediately above in the supply chain (in this case, “Quantum Talent Technologies”)<sup id="fnref:76" role="doc-noteref"><a href="#fn:76" class="govuk-link" rel="footnote">[footnote 76]</a></sup>.</p> <p>For example, it can be difficult for equality law – which is the statutory framework designed to legally protect people against discrimination in the workplace and in wider society<sup id="fnref:77" role="doc-noteref"><a href="#fn:77" class="govuk-link" rel="footnote">[footnote 77]</a></sup> – to allocate liability to anyone other than the end deployer. This could ultimately lead to harmful outcomes (if the actors most able to address risks and harms are not incentivised or held accountable), undermine <abbr title="artificial intelligence">AI</abbr> adoption, and dampen innovation across the UK economy. We will continue to analyse challenges such as these as part of our ongoing policy work on life cycle accountability for <abbr title="artificial intelligence">AI</abbr>.</p> </div> <p>67.
While highly capable narrow <abbr title="artificial intelligence">AI</abbr> systems are in scope of the regulatory framework for <abbr title="artificial intelligence">AI</abbr>, these systems may require a different set of interventions if they present potentially dangerous capabilities. Narrow systems are more likely than general-purpose systems to be subject to effective regulation within the remit of an existing regulator. We will continue to gather evidence on whether the specialised nature of highly capable narrow systems demands a different approach to general-purpose systems.</p> <h4 id="the-role-of-voluntary-measures-in-initially-building-an-effective-and-targeted-regulatory-approach">5.2.2. The role of voluntary measures in initially building an effective and targeted regulatory approach</h4> <p>68. We have already started to make the world safer today by securing commitments from leading <abbr title="artificial intelligence">AI</abbr> companies on voluntary measures. Building on voluntary commitments brokered by the White House, the Secretary of State for Science, Innovation and Technology wrote to seven frontier <abbr title="artificial intelligence">AI</abbr> companies prior to the <abbr title="artificial intelligence">AI</abbr> Safety Summit requesting that they publish their safety policies. All seven companies published their policies before the <abbr title="artificial intelligence">AI</abbr> Safety Summit, increasing transparency within the <abbr title="artificial intelligence">AI</abbr> community and encouraging safe industry practice<sup id="fnref:78" role="doc-noteref"><a href="#fn:78" class="govuk-link" rel="footnote">[footnote 78]</a></sup>. We also published a report on emerging processes for frontier <abbr title="artificial intelligence">AI</abbr> safety to inform the future development of safety policies (see Box 3)<sup id="fnref:79" role="doc-noteref"><a href="#fn:79" class="govuk-link" rel="footnote">[footnote 79]</a></sup>. In 2024, we will encourage <abbr title="artificial intelligence">AI</abbr> companies to develop their <abbr title="artificial intelligence">AI</abbr> safety and responsible capability scaling policies<sup id="fnref:80" role="doc-noteref"><a href="#fn:80" class="govuk-link" rel="footnote">[footnote 80]</a></sup>. As part of this work, we will update our emerging processes guide by the end of the year.</p> <div class="call-to-action"> <h4 id="box-3-emerging-processes-for-frontier-ai-safety">Box 3: Emerging Processes for Frontier <abbr title="artificial intelligence">AI</abbr> Safety</h4> <p>Ahead of the <abbr title="artificial intelligence">AI</abbr> Safety Summit, the UK government outlined a set of emerging safety processes to provide information to companies on how they can ensure and maintain the safety of <abbr title="artificial intelligence">AI</abbr> technologies.</p> <p>The document covers nine emerging processes:</p> <p>1. Responsible Capability Scaling - a framework for managing risk as organisations scale the capability of frontier <abbr title="artificial intelligence">AI</abbr> systems, enabling companies to prepare for potential future, more dangerous <abbr title="artificial intelligence">AI</abbr> risks before they occur.</p> <p>2. Model Evaluations and Red Teaming - methods to assess the risks <abbr title="artificial intelligence">AI</abbr> systems pose and inform better decisions about training, securing, and deploying them.</p> <p>3.
Model Reporting and Information Sharing - practices that increase government visibility of frontier <abbr title="artificial intelligence">AI</abbr> development and deployment and enable users to make well-informed choices about whether and how to use <abbr title="artificial intelligence">AI</abbr> systems.</p> <p>4. Security Controls including Securing Model Weights - measures such as cyber security and other security controls that underpin <abbr title="artificial intelligence">AI</abbr> system security.</p> <p>5. Reporting Structure for Vulnerabilities - a process to enable outsiders to identify safety and security issues in an <abbr title="artificial intelligence">AI</abbr> system.</p> <p>6. Identifiers of <abbr title="artificial intelligence">AI</abbr>-generated Material - tools to mitigate the creation and distribution of deceptive <abbr title="artificial intelligence">AI</abbr>-generated content by providing information about whether content has been <abbr title="artificial intelligence">AI</abbr> generated or modified.</p> <p>7. Prioritising Research on Risks Posed by <abbr title="artificial intelligence">AI</abbr> - research processes to identify and address the emerging risks posed by frontier <abbr title="artificial intelligence">AI</abbr>.</p> <p>8. Preventing and Monitoring Model Misuse - practices to identify and prevent intentional misuse of <abbr title="artificial intelligence">AI</abbr> systems.</p> <p>9. Data Input Controls and Audits - measures to identify and manage training data that is likely to increase the dangerous capabilities of frontier <abbr title="artificial intelligence">AI</abbr> systems, and the risks they pose.</p> <p>The document consolidated emerging thinking in <abbr title="artificial intelligence">AI</abbr> safety from research institutes and academia, companies, and civil society, with whom the UK government collaborated and engaged throughout its development. <abbr title="artificial intelligence">AI</abbr> safety is an ongoing project, and the processes and practices will continue to evolve through research and dialogue between governments and the broader <abbr title="artificial intelligence">AI</abbr> ecosystem. The document provides a useful starting point for future frameworks for action both in the UK and globally.</p> </div> <p>69. Alongside these voluntary measures, at the <abbr title="artificial intelligence">AI</abbr> Safety Summit, governments and <abbr title="artificial intelligence">AI</abbr> companies agreed that both parties have a crucial role to play in testing the next generation of <abbr title="artificial intelligence">AI</abbr> models, to ensure <abbr title="artificial intelligence">AI</abbr> safety – both before and after models are deployed. In the UK, the newly established <abbr title="artificial intelligence">AI</abbr> Safety Institute (see Box 4) leads this work. Leading <abbr title="artificial intelligence">AI</abbr> tech companies have pledged to provide the Institute with priority access to their systems. The Institute has already begun testing, and is committed to doing so in partnership with other countries and their respective safety institutes. We will shortly provide an update on the <abbr title="artificial intelligence">AI</abbr> Safety Institute’s approach to evaluations.
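</p> <p>To make the kind of pre-deployment testing described in paragraph 69 more concrete, the sketch below shows one way an evaluation harness could be structured. This is a purely illustrative outline, not the Institute’s actual methodology: the evaluation name, the probe set, the scoring rule, and the concern threshold are all assumptions made for the example.</p> <pre><code>"""A minimal sketch of pre-deployment safety testing of the kind described
in paragraph 69. Everything here (the evaluation name, probe, threshold,
and the `generate` interface) is a hypothetical assumption."""

from typing import Callable

def hazardous_uplift_score(generate: Callable[[str], str]) -> float:
    """Score a model's text interface between 0.0 (no concerning
    capability elicited) and 1.0 (fully concerning) on a probe set."""
    probes = ["(illustrative probe for a hazardous capability)"]
    refusals = sum("cannot" in generate(p).lower() for p in probes)
    return 1.0 - refusals / len(probes)

# Each evaluation is paired with an assumed concern threshold.
EVALUATIONS = {"hazardous_uplift": (hazardous_uplift_score, 0.2)}

def pre_deployment_report(generate: Callable[[str], str]) -> dict:
    """Run every evaluation and flag any score above its threshold,
    giving the kind of early-warning signal described in Box 4."""
    return {
        name: score_fn(generate) > threshold
        for name, (score_fn, threshold) in EVALUATIONS.items()
    }

# Usage: a stub model that always refuses raises no flags.
stub_model = lambda prompt: "I cannot help with that."
assert pre_deployment_report(stub_model) == {"hazardous_uplift": False}
</code></pre> <p>In practice, a flagged result would feed the engagement process described in Box 4 (working with the developer on suitable mitigations) rather than producing a binary “safe” or “unsafe” label.</p> <p>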
Our assessment of the capabilities and risks of <abbr title="artificial intelligence">AI</abbr> will also be underpinned by a new International Report on the Science of <abbr title="artificial intelligence">AI</abbr> Safety<sup id="fnref:81" role="doc-noteref"><a href="#fn:81" class="govuk-link" rel="footnote">[footnote 81]</a></sup>, chaired by leading <abbr title="artificial intelligence">AI</abbr> pioneer Yoshua Bengio (see paragraph 87).</p> <div class="call-to-action"> <h4 id="box-4-the-ai-safety-institute-aisi">Box 4: The <abbr title="artificial intelligence">AI</abbr> Safety Institute (AISI)</h4> <p>At present, frontier <abbr title="artificial intelligence">AI</abbr> developers are building powerful systems that outpace the ability of government and regulators to make them safe. As such, the government’s first challenge is one of knowledge: we do not fully understand what the most powerful systems are capable of and we urgently need to plug that gap. This will be the task of the new <abbr title="artificial intelligence">AI</abbr> Safety Institute. It will advance the world’s knowledge of <abbr title="artificial intelligence">AI</abbr> safety by carefully examining, evaluating, and testing new frontier <abbr title="artificial intelligence">AI</abbr> systems. In addition, it will research new techniques for understanding and mitigating <abbr title="artificial intelligence">AI</abbr> risk, and conduct fundamental research on how to keep people safe in the face of fast and unpredictable progress in <abbr title="artificial intelligence">AI</abbr>.</p> <p>The <abbr title="artificial intelligence">AI</abbr> Safety Institute’s work will be fundamental to informing the UK’s regulatory framework. It will provide foundational insights to our governance regime and help ensure that the UK takes an evidence-based, proportionate approach to regulating the risks of <abbr title="artificial intelligence">AI</abbr>. It will initially perform three core functions:</p> <ul> <li> <strong>Develop and conduct evaluations on advanced <abbr title="artificial intelligence">AI</abbr> systems</strong>, aiming to characterise safety-relevant capabilities, understand the safety and security of systems, and assess their societal impacts.</li> <li> <strong>Drive foundational <abbr title="artificial intelligence">AI</abbr> safety research</strong>. The Institute’s research will support short and long-term <abbr title="artificial intelligence">AI</abbr> governance. It will ensure the UK’s iterative regulatory framework for <abbr title="artificial intelligence">AI</abbr> is informed by the latest expertise and lay the foundation for technically grounded international governance of advanced <abbr title="artificial intelligence">AI</abbr>. 
Projects will range from rapid development of tools to inform governance, to exploratory <abbr title="artificial intelligence">AI</abbr> safety research which may be underexplored by industry.</li> <li> <strong>Facilitate information exchange</strong>, including by establishing – on a voluntary basis and subject to existing privacy and data regulation – clear information-sharing channels between the Institute and other national and international actors, such as policymakers, international partners, private companies, academia, civil society, and the broader public.</li> </ul> <p>The goal of the Institute’s evaluations will not be to designate any particular <abbr title="artificial intelligence">AI</abbr> system as “safe”; it is not clear that available techniques could justify such a definitive determination. The <abbr title="artificial intelligence">AI</abbr> Safety Institute is not a regulator; its role is to develop the technical expertise to understand the capabilities and risks of <abbr title="artificial intelligence">AI</abbr> systems, informing the government’s broader actions. Nevertheless, we expect progress in system evaluations to enable better informed decision making by governments and companies and act as an early warning system for some of the most concerning risks. If the <abbr title="artificial intelligence">AI</abbr> Safety Institute identifies a potentially dangerous capability through its evaluation of advanced <abbr title="artificial intelligence">AI</abbr> systems, the Institute will, where appropriate, address risks by engaging the developer on suitable safety mitigations and collaborating with the government’s established <abbr title="artificial intelligence">AI</abbr> risk management and regulatory architecture.</p> <p>The Institute is focused on the most advanced current <abbr title="artificial intelligence">AI</abbr> capabilities and any future developments. It will consider open source systems as well as those deployed with various forms of access controls.</p> </div> <p>70. These voluntary actions allow us to test and learn what works in order to adapt our regulatory approach. We will strengthen our technical understanding to build wider consensus on key interventions, such as whether there should be conditions in which it would be right to pause the development of specific systems, as some have proposed.</p> <p>71. While voluntary measures help us make <abbr title="artificial intelligence">AI</abbr> safer now, the intense competition between companies to release ever-more-capable systems means we will need to remain highly vigilant to meaningful compliance, accountability, and effective risk mitigation. It may be the case that commercial incentives are not always aligned with the public good. If the market evolves such that there are a larger number of firms that are building highly capable systems, the governance of voluntary approaches will be much harder<sup id="fnref:82" role="doc-noteref"><a href="#fn:82" class="govuk-link" rel="footnote">[footnote 82]</a></sup>. It will also be increasingly important to ensure the right accountability mechanisms and corporate governance frameworks are in place for companies building the most powerful systems.</p> <h4 id="the-case-for-future-binding-measures">5.2.3. The case for future binding measures</h4> <p>72. The section above highlights how the context-based approach may miss significant risks posed by highly capable general-purpose systems and leave the developers of those systems unaccountable. 
Whilst voluntary measures are a useful tool to address risks today, we anticipate that all jurisdictions will, in time, want to place targeted mandatory interventions on the design, development, and deployment of such systems to ensure risks are adequately addressed.</p> <h4 id="foundation-model-supply-chain">Foundation model supply chain</h4> <figure class="image embedded"><div class="img"><img src="https://assets.publishing.service.gov.uk/media/65afa2a1bc0de30013187327/foundation_model_supply_chain.svg" alt=""></div> <figcaption><p>Foundation model supply chain.</p></figcaption></figure> <p>Note: This is one possible model (there will not always be a separate or single company at each layer).</p> <p><strong>Image 2</strong>: A diagram of the foundation model supply chain taken from the Ada Lovelace Institute’s ‘<a rel="external" href="https://www.adalovelaceinstitute.org/resource/foundation-models-explainer/" class="govuk-link">Foundation Models Explainer</a>’.</p> <p>While there are many different ways to understand and describe the often complex life cycles of <abbr title="artificial intelligence">AI</abbr> technologies, this diagram illustrates that our proposed future measures would be clearly targeted at the small number of companies that work in the foundation model developer layer building highly capable general-purpose <abbr title="artificial intelligence">AI</abbr>.</p> <p>73. Predicting which systems are capable enough to lead to significant risk is not straightforward. In line with our proportionate approach, any future regulation would be targeted at the small number of developers of the most powerful general-purpose systems. We propose to do this by establishing dynamic thresholds that can quickly respond to advances in <abbr title="artificial intelligence">AI</abbr> development. Our preliminary analysis indicates that initial thresholds could be based on forecasts of capabilities using a combination of two proxies: compute (i.e. the amount of compute used to train the model) and capability benchmarking (i.e. assessing capabilities in certain risk areas to identify where we think high capabilities result in high risk). At least for the time being, the combination of these proxies can predict <abbr title="artificial intelligence">AI</abbr> capabilities reasonably well; however, a range of thresholds might be needed (see the illustrative sketch below).</p> <p>74. Any new obligations would ensure that the developers of the in-scope systems adhere to the principles set out in the <abbr title="artificial intelligence">AI</abbr> regulation white paper, including safety, security, transparency, fairness, and accountability. This could include transparency measures (for example, relating to the data that systems are trained on); risk management, accountability, and corporate governance related obligations; or actions to address potential harms, such as those caused by misuse or unfair bias before or after training.</p> <p>75. The open release of <abbr title="artificial intelligence">AI</abbr> has, overall, been beneficial for innovation, transparency, and accountability. A degree of openness in <abbr title="artificial intelligence">AI</abbr> is, and will continue to be, critical to scientific progress, and we recognise that openness is core to our society and culture. However, while we are committed to defending the value of openness, we note that there is a balance to strike as we seek to mitigate potential risks.
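</p> <p>As referenced in paragraph 73, the sketch below illustrates how a compound, dynamic threshold combining the two proxies might be expressed. The compute cut-off, benchmark names, and scores are hypothetical values chosen purely for illustration; they are not proposed or actual UK policy figures.</p> <pre><code>"""Illustrative sketch of a compound in-scope test combining a compute
proxy with capability benchmarking, per paragraph 73. All numbers and
benchmark names are assumptions for the example only."""

from dataclasses import dataclass, field

COMPUTE_THRESHOLD_FLOP = 1e26  # assumed training-compute cut-off

# Assumed minimum scores (0.0 to 1.0) on risk-relevant capability benchmarks.
CAPABILITY_FLOORS = {
    "cyber_offence_eval": 0.6,
    "autonomy_eval": 0.5,
}

@dataclass
class ModelProfile:
    name: str
    training_flop: float
    benchmark_scores: dict = field(default_factory=dict)

def in_scope(model: ModelProfile) -> bool:
    """A model is in scope if it crosses the compute proxy OR any
    capability floor; combining both proxies keeps the threshold
    responsive as training efficiency improves."""
    if model.training_flop >= COMPUTE_THRESHOLD_FLOP:
        return True
    return any(
        model.benchmark_scores.get(bench, 0.0) >= floor
        for bench, floor in CAPABILITY_FLOORS.items()
    )

# Usage: a hypothetical model below the compute cut-off is still in
# scope because it crosses an assumed capability floor.
example = ModelProfile("hypothetical-model", 5e25, {"autonomy_eval": 0.7})
assert in_scope(example)
</code></pre> <p>On this design, a regulator could periodically revise both the compute cut-off and the capability floors, which is one way the “dynamic thresholds” described above could respond to advances in <abbr title="artificial intelligence">AI</abbr> development.</p> <p>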
In this regard, we see an emerging consensus on the need to explore pre-deployment capability testing and risk assessment for the most powerful <abbr title="artificial intelligence">AI</abbr> systems, including where systems might be released openly. Pre-deployment testing could inform the deployment options available for a model and change the risk prevention steps required of organisations prior to the model’s release. Recognising the complexity of the debate, we are working closely with the open source community and <abbr title="artificial intelligence">AI</abbr> developers to understand their needs. Our engagement with those developing and using <abbr title="artificial intelligence">AI</abbr> models that are highly capable, general-purpose, and open access will allow us to explore the need for nuanced and targeted policy options that minimise any negative impacts on valuable open source activity, whilst mitigating risks.</p> <p>76. The challenges posed by <abbr title="artificial intelligence">AI</abbr> will ultimately require legislative action in every country once understanding of risk has matured. Introducing binding measures too soon, even if highly targeted, could fail to effectively address risks, quickly become out of date, or stifle innovation and prevent people across the UK from benefiting from <abbr title="artificial intelligence">AI</abbr>. In line with the adaptable approach set out in the <abbr title="artificial intelligence">AI</abbr> regulation white paper, the government would consider introducing binding measures if we determined that existing mitigations were no longer adequate and we had identified interventions that would mitigate risks in a targeted way. As with any decision to legislate, the government would only consider introducing legislation if we were not sufficiently confident that voluntary measures would be implemented effectively by all relevant parties and if we assessed that risks could not be effectively mitigated using existing legal powers. Finally, prior to legislating, the government would need to be confident that we could mandate measures in a way that would significantly mitigate risk without unduly dampening innovation and competition.</p> <p>77. We know there is more work to do to refine our approach to regulating the most capable <abbr title="artificial intelligence">AI</abbr> systems and the actors that design, develop, and deploy them. We look forward to developing our proposals by working closely with industry, academia, civil society, and the wider public. In Box 5, below, we set out the key questions that will guide our policy development.</p> <div class="call-to-action"> <h4 id="box-5-key-questions-for-policy-development-on-the-future-regulation-of-highly-capable-general-purpose-systems">Box 5: Key questions for policy development on the future regulation of highly capable general-purpose systems</h4> <p>Building on the evidence we received to our <abbr title="artificial intelligence">AI</abbr> regulation white paper consultation on the topic of life cycle accountability and foundation models, over the coming months we will work closely with a range of experts and international partners to examine the questions below. We will publish findings from this engagement in a series of expert discussion papers.
We will also publish the next iteration of our thinking and the steps we are taking in relation to the most capable <abbr title="artificial intelligence">AI</abbr> systems.</p> <ul> <li> <p>Which specific risks should be addressed through future regulatory interventions targeted at highly capable <abbr title="artificial intelligence">AI</abbr> systems? How do we ensure the regime is resilient to future developments?</p> </li> <li> <p>When should the government and regulators intervene? Which systems should we be targeting? What would a compound threshold for intervention look like? Is compute a useful proxy for now, if thresholds remain dynamic? What about capability benchmarking?</p> </li> <li> <p>Which obligations should be imposed on developers? Should the obligations be linked to our <abbr title="artificial intelligence">AI</abbr> regulation principles? How do we ensure that the obligations are flexible but clear? At what stage could it be necessary to pause model development?</p> </li> <li> <p>What, if any, new regulatory powers are required? How would this work alongside the existing regulatory landscape?</p> </li> <li> <p>What should enforcement of any new regulation look like? What legal responsibilities should developers of in-scope systems have? Are updates to civil or criminal liability frameworks needed?</p> </li> <li> <p>How do we provide regulatory certainty to drive responsible <abbr title="artificial intelligence">AI</abbr> innovation while retaining an adaptable regime that can accommodate fast technical developments? How do we avoid creating barriers to market entry and scale-up?</p> </li> <li> <p>Should certain capabilities trigger controls on open release? What would the negative consequences be? How should thresholds be set? What controls could be imposed?</p> </li> <li> <p>What are the roles of existing transparency and accountability frameworks? How can strong transparency and good accountability be encouraged or assured to support responsible development of the most capable <abbr title="artificial intelligence">AI</abbr> systems?</p> </li> <li> <p>Should developers of highly capable <abbr title="artificial intelligence">AI</abbr> systems be subject to specific  corporate governance requirements? Is there a role for requirements on developers of highly capable <abbr title="artificial intelligence">AI</abbr> systems to consider and mitigate risks to society or humanity at large?</p> </li> <li> <p>How do potential new measures on highly capable <abbr title="artificial intelligence">AI</abbr> systems link to wider life cycle accountability for <abbr title="artificial intelligence">AI</abbr>? Are other actors in the <abbr title="artificial intelligence">AI</abbr> value chain also hard for regulators to reach in a way that hampers our ability to address risk and support <abbr title="artificial intelligence">AI</abbr> innovation and adoption?</p> </li> </ul> </div> <p>78. As we set out in the <abbr title="artificial intelligence">AI</abbr> regulation white paper, our intention is for our regulatory framework to apply to the whole of the UK subject to existing exemptions and derogations for unique operating requirements, such as defence and national security. However, we recognise that <abbr title="artificial intelligence">AI</abbr> is used across a wide variety of sectors, some of which are reserved and some of which are devolved. 
As our policy develops and we consider the introduction of binding requirements on the developers of the most capable general-purpose systems, we will continue to assess any devolution impacts and the need for extraterritorial reach.</p> <p>79. We are committed to engaging the territorial offices and devolved administrations on both the design and delivery of the regulatory framework, so that businesses and citizens across the UK benefit from our regulatory approach.</p> <h3 id="working-with-international-partners-to-promote-effective-collaboration-on-ai-governance"> <span class="number">5.3. </span> Working with international partners to promote effective collaboration on <abbr title="artificial intelligence">AI</abbr> governance</h3> <p>80. <abbr title="artificial intelligence">AI</abbr> knows no borders and its impact will shape societies and economies in all corners of the world: <abbr title="artificial intelligence">AI</abbr> developed in one nation will increasingly affect the lives of citizens living in others. Effective governance of <abbr title="artificial intelligence">AI</abbr> will therefore require equally impactful international cooperation, which must build on the work of existing multilateral and multi-stakeholder fora and initiatives.</p> <p>81. The UK is an established global leader in <abbr title="artificial intelligence">AI</abbr> with a history of driving forward the international conversation and taking clear, decisive action to build bilateral and multilateral agreement. Our focus to date has been on collaborative action to support the development of <abbr title="artificial intelligence">AI</abbr> in line with the context-based framework and principles set out in the <abbr title="artificial intelligence">AI</abbr> regulation white paper<sup id="fnref:83" role="doc-noteref"><a href="#fn:83" class="govuk-link" rel="footnote">[footnote 83]</a></sup>. This involves working alongside different groups of countries in accordance with need and acting in a targeted and proportionate manner. Our goal remains to work with others to build an international community that is able to realise the opportunities of <abbr title="artificial intelligence">AI</abbr> on a global scale. We promote our values and collaborate where suitable to address the most pressing current and future <abbr title="artificial intelligence">AI</abbr>-related risks. We carefully balance safety and innovation, acting alongside our partners to promote the international design, development, deployment, and use of the highest-potential <abbr title="artificial intelligence">AI</abbr> systems.</p> <p>82. We will continue to act through bilateral partnerships and multilateral initiatives – including future <abbr title="artificial intelligence">AI</abbr> Safety Summits – to promote safe, secure, and trustworthy <abbr title="artificial intelligence">AI</abbr>, underpinned by effective international <abbr title="artificial intelligence">AI</abbr> governance. Throughout this, we will adopt a multi-stakeholder approach: we will collaborate with our international partners by working with representatives from industry, academia, civil society, and government to ensure we can reap the extraordinary benefits afforded by these technologies<sup id="fnref:84" role="doc-noteref"><a href="#fn:84" class="govuk-link" rel="footnote">[footnote 84]</a></sup>.</p> <p>83. Working with these networks, we will unlock the opportunities presented by <abbr title="artificial intelligence">AI</abbr> while addressing potential risks.
In support of this, we maintain close relationships with our international partners across the full range of issues detailed in section 5.1, as well as on our respective emerging domestic approaches.</p> <p>84. Domestic and international approaches must develop in tandem. In developing our own approach to <abbr title="artificial intelligence">AI</abbr> regulation we will, therefore, both influence and respond to international developments. We will continue to proactively engage with the international landscape to ensure the appropriate degree of cooperation required for effective <abbr title="artificial intelligence">AI</abbr> governance. We will achieve appropriate levels of coherence with other regulatory regimes, promote safety, and minimise potential barriers to trade – maximising opportunities for individuals and businesses across the UK and beyond. We will continue to work with our international partners to drive the development and adoption of tools for trustworthy <abbr title="artificial intelligence">AI</abbr>, such as assurance techniques and global technical standards, in order to promote interoperability and avoid fragmentation.</p> <p>85. We will continue to recognise the critical nature of safety in underpinning, but not supplanting, all other aspects of international <abbr title="artificial intelligence">AI</abbr> collaboration. As the Prime Minister Rishi Sunak set out, our “vision, and our ultimate goal, should be to work towards a more international approach to safety”<sup id="fnref:85" role="doc-noteref"><a href="#fn:85" class="govuk-link" rel="footnote">[footnote 85]</a></sup>. As noted above, the UK hosted the first ever <abbr title="artificial intelligence">AI</abbr> Safety Summit in November 2023 and secured the Bletchley Declaration, a landmark agreement between 29 parties, including 28 countries from across the globe and the European Union<sup id="fnref:86" role="doc-noteref"><a href="#fn:86" class="govuk-link" rel="footnote">[footnote 86]</a></sup>. The Declaration builds a shared understanding of the opportunities and risks that <abbr title="artificial intelligence">AI</abbr> presents and the need for collaborative action to ensure the safety of the most powerful <abbr title="artificial intelligence">AI</abbr> systems now and in the future. A number of countries and companies developing frontier <abbr title="artificial intelligence">AI</abbr> also agreed to state-led testing of the next generation of systems, including through partnerships with newly announced <abbr title="artificial intelligence">AI</abbr> Safety Institutes (see Box 4 for more detail)<sup id="fnref:87" role="doc-noteref"><a href="#fn:87" class="govuk-link" rel="footnote">[footnote 87]</a></sup>.</p> <p>86. The pace of <abbr title="artificial intelligence">AI</abbr> development shows no sign of slowing down, so the UK is committed to establishing enduring international collaboration on <abbr title="artificial intelligence">AI</abbr> safety, building on the foundations of the <abbr title="artificial intelligence">AI</abbr> Safety Summit agreements. To maintain this momentum and ensure that action is taken to secure <abbr title="artificial intelligence">AI</abbr> safety, the Republic of Korea has agreed to co-host the next <abbr title="artificial intelligence">AI</abbr> Safety Summit with the UK. France has agreed to host the following summit.</p> <p>87. 
The UK’s <abbr title="artificial intelligence">AI</abbr> Safety Institute represents one of our key contributions to international collaboration on <abbr title="artificial intelligence">AI</abbr>. The Institute will partner with other countries to facilitate collaboration between governments on <abbr title="artificial intelligence">AI</abbr> safety testing and governance, and to support them in developing their own capabilities. The Institute will facilitate international collaboration in three key ways:</p> <ul> <li> <p><strong>Partnerships</strong>: the <abbr title="artificial intelligence">AI</abbr> Safety Institute has agreed a partnership with the US <abbr title="artificial intelligence">AI</abbr> Safety Institute and with the government of Singapore to collaborate on <abbr title="artificial intelligence">AI</abbr> safety testing and is in regular dialogue on <abbr title="artificial intelligence">AI</abbr> safety issues with international partners.</p> </li> <li> <p><strong>International Report on the Science of <abbr title="artificial intelligence">AI</abbr> Safety</strong><sup id="fnref:88" role="doc-noteref"><a href="#fn:88" class="govuk-link" rel="footnote">[footnote 88]</a></sup>: The report was first unveiled as the State of the Science Report at the UK <abbr title="artificial intelligence">AI</abbr> Safety Summit in November 2023, where the countries represented agreed to the development of an internationally authored report on the capabilities and risks of advanced <abbr title="artificial intelligence">AI</abbr>. Rather than producing new material, it will summarise the best of existing research and identify priority research areas, providing a synthesis of the existing knowledge of risks from advanced <abbr title="artificial intelligence">AI</abbr>.</p> </li> <li> <p><strong>Information Exchange</strong>: the <abbr title="artificial intelligence">AI</abbr> Safety Institute’s evaluations and research are the first step in addressing the insight gaps between industry, governments, academia, and the public. This will ensure relevant parties, including international partners, receive the information they need to inform the development of shared protocols.</p> </li> </ul> <p>88. The UK also plays a proactive role through a range of multilateral initiatives to drive forward our ambition to promote the safe and responsible design, development, deployment, and use of <abbr title="artificial intelligence">AI</abbr>. This includes:</p> <ul> <li> <p><strong><abbr title="Group of Seven">G7</abbr></strong>: Working in cooperation with our partners in this forum, the UK has made significant progress to quickly respond to new technological developments and drive work on effective international <abbr title="artificial intelligence">AI</abbr> governance. In December 2023, under Japan’s Presidency, <abbr title="Group of Seven">G7</abbr> Leaders welcomed the Hiroshima <abbr title="artificial intelligence">AI</abbr> Process Comprehensive Policy Framework that includes international guiding principles for all <abbr title="artificial intelligence">AI</abbr> actors and a Code of Conduct for organisations developing advanced <abbr title="artificial intelligence">AI</abbr> systems, as well as a work plan to further advance these outcomes<sup id="fnref:89" role="doc-noteref"><a href="#fn:89" class="govuk-link" rel="footnote">[footnote 89]</a></sup>.
We encourage <abbr title="artificial intelligence">AI</abbr> actors, and especially <abbr title="artificial intelligence">AI</abbr> developers, to further engage and support these outcomes. We look forward to collaborating further on <abbr title="artificial intelligence">AI</abbr> under Italy’s <abbr title="Group of Seven">G7</abbr> Presidency in 2024.</p> </li> <li> <p><strong><abbr title="Group of 20">G20</abbr></strong>: In September 2023, as part of India’s <abbr title="Group of 20">G20</abbr> Presidency, the UK Prime Minister agreed to and endorsed the New Delhi Leaders’ Declaration alongside all other <abbr title="Group of 20">G20</abbr> Members<sup id="fnref:90" role="doc-noteref"><a href="#fn:90" class="govuk-link" rel="footnote">[footnote 90]</a></sup>. The Declaration reaffirmed the UK’s commitment to the 2019 <abbr title="Group of 20">G20</abbr> <abbr title="artificial intelligence">AI</abbr> Principles and emphasised the importance of a governance approach that balances the benefits and risks of <abbr title="artificial intelligence">AI</abbr> and promotes responsible <abbr title="artificial intelligence">AI</abbr> for achieving the <abbr title="United Nations">UN</abbr> Sustainable Development Goals<sup id="fnref:91" role="doc-noteref"><a href="#fn:91" class="govuk-link" rel="footnote">[footnote 91]</a></sup>. The UK will work closely with Brazil on its <abbr title="artificial intelligence">AI</abbr> ambitions as part of its 2024 <abbr title="Group of 20">G20</abbr> Presidency, which will centre on <abbr title="artificial intelligence">AI</abbr> for inclusive sustainable development.</p> </li> <li> <p><strong>Global Partnership on <abbr title="artificial intelligence">AI</abbr> (<abbr title="Global Partnership on AI">GPAI</abbr>)</strong>: The UK continues to actively shape <abbr title="Global Partnership on AI">GPAI</abbr>’s multi-stakeholder project-based activities to guide the responsible development and use of <abbr title="artificial intelligence">AI</abbr> grounded in human rights, inclusion, diversity, innovation, and economic growth. The UK was pleased to attend the December 2023 <abbr title="Global Partnership on AI">GPAI</abbr> Summit in New Delhi, represented by the Minister for <abbr title="artificial intelligence">AI</abbr>, Viscount Camrose, and to both endorse the <abbr title="Global Partnership on AI">GPAI</abbr> New Delhi Ministerial Declaration<sup id="fnref:92" role="doc-noteref"><a href="#fn:92" class="govuk-link" rel="footnote">[footnote 92]</a></sup> and host a side-event on outcomes and next steps following the <abbr title="artificial intelligence">AI</abbr> Safety Summit. The UK has also begun a two-year mandate as a Steering Committee member and will work with India’s Chairmanship to ensure <abbr title="Global Partnership on AI">GPAI</abbr> is reaching its full potential.</p> </li> <li> <p><strong>Council of Europe</strong>: The UK is continuing to work closely with like-minded nations on the proposed Council of Europe Convention on <abbr title="artificial intelligence">AI</abbr> to help protect human rights, democracy, and the rule of law.
The Convention offers an opportunity to ensure these important values are codified internationally as one part of a wider approach to effective international governance.</p> </li> <li> <p><strong>Organisation for Economic Co-operation and Development (<abbr title="Organisation for Economic Co-operation and Development">OECD</abbr>)</strong>: The UK is an active member of the Working Party on <abbr title="artificial intelligence">AI</abbr> Governance (<abbr title="artificial intelligence Governance">AIGO</abbr>) and recognises the forum’s role in supporting the implementation of the <abbr title="Organisation for Economic Co-operation and Development">OECD</abbr> <abbr title="artificial intelligence">AI</abbr> Principles and enabling the exchange of experience and best practice across member countries. In 2024, the UK will support the revision of the <abbr title="Organisation for Economic Co-operation and Development">OECD</abbr> <abbr title="artificial intelligence">AI</abbr> Principles<sup id="fnref:93" role="doc-noteref"><a href="#fn:93" class="govuk-link" rel="footnote">[footnote 93]</a></sup> and continue to provide case studies from the UK’s Portfolio of <abbr title="artificial intelligence">AI</abbr> Assurance Techniques<sup id="fnref:94" role="doc-noteref"><a href="#fn:94" class="govuk-link" rel="footnote">[footnote 94]</a></sup> to the <abbr title="Organisation for Economic Co-operation and Development">OECD</abbr>’s Catalogue of Tools and Metrics for Trustworthy <abbr title="artificial intelligence">AI</abbr><sup id="fnref:95" role="doc-noteref"><a href="#fn:95" class="govuk-link" rel="footnote">[footnote 95]</a></sup>.</p> </li> <li> <p><strong>United Nations (<abbr title="United Nations">UN</abbr>) and its associated agencies</strong>: Given the organisation’s unique role in convening a wide range of nations, the UK recognises the value of the <abbr title="United Nations">UN</abbr>-led discussions on <abbr title="artificial intelligence">AI</abbr> and engages regularly to shape global norms on <abbr title="artificial intelligence">AI</abbr>. In July 2023, the UK initiated and chaired the first <abbr title="United Nations">UN</abbr> Security Council briefing session on <abbr title="artificial intelligence">AI</abbr>, and the Deputy Prime Minister chaired a session on frontier <abbr title="artificial intelligence">AI</abbr> risks at <abbr title="United Nations">UN</abbr> High Level Week in September 2023. The UK continues to collaborate with a range of partners across <abbr title="United Nations">UN</abbr> <abbr title="artificial intelligence">AI</abbr> initiatives, including negotiations for the Global Digital Compact, which aims to advance the Sustainable Development Goals through technologies such as <abbr title="artificial intelligence">AI</abbr>, monitoring the implementation of the <abbr title="United Nations Educational, Scientific and Cultural Organization">UNESCO</abbr> Recommendation on the Ethics of <abbr title="artificial intelligence">AI</abbr><sup id="fnref:96" role="doc-noteref"><a href="#fn:96" class="govuk-link" rel="footnote">[footnote 96]</a></sup>, and engaging constructively at the International Telecommunication Union, which hosted the ‘<abbr title="artificial intelligence">AI</abbr> for Good’ Summit in July 2023.
The UK will also continue to work closely with the <abbr title="United Nations">UN</abbr> <abbr title="artificial intelligence">AI</abbr> Advisory Body and is reviewing its interim report: Governing <abbr title="artificial intelligence">AI</abbr> for Humanity<sup id="fnref:97" role="doc-noteref"><a href="#fn:97" class="govuk-link" rel="footnote">[footnote 97]</a></sup>.</p> </li> <li> <p><strong>Global Standards Development Organisations (<abbr title="Standards Development Organisations">SDOs</abbr>)</strong>: The UK is engaging directly with <abbr title="Standards Development Organisations">SDOs</abbr>, such as the <abbr title="International Organization for Standardization">ISO</abbr> and <abbr title="International Electrotechnical Commission">IEC</abbr>, and is supporting developments in technical <abbr title="artificial intelligence">AI</abbr> standards. The UK champions a global digital standards ecosystem that is open, transparent, and consensus-based. The UK also aims to support innovation and strengthen a multi-stakeholder, industry-led model for the development of technical <abbr title="artificial intelligence">AI</abbr> standards, including through initiatives such as the UK’s <abbr title="artificial intelligence">AI</abbr> Standards Hub<sup id="fnref:98" role="doc-noteref"><a href="#fn:98" class="govuk-link" rel="footnote">[footnote 98]</a></sup>. We support UK stakeholders to participate in <abbr title="Standards Development Organisations">SDOs</abbr> to both leverage the benefits of global technical standards here in the UK and deliver global digital technical standards shaped by democratic values.</p> </li> </ul> <p>89. Additionally, the UK is committed to ensuring that the benefits of <abbr title="artificial intelligence">AI</abbr> are widely accessible. This includes working with international partners to fund safe and responsible <abbr title="artificial intelligence">AI</abbr> projects for development around the world. As announced at the <abbr title="artificial intelligence">AI</abbr> Safety Summit, the UK is contributing £38 million through its new <abbr title="artificial intelligence">AI</abbr> for Development programme to support safe, responsible and inclusive <abbr title="artificial intelligence">AI</abbr> innovation to accelerate progress on development challenges, focused initially in Africa<sup id="fnref:99" role="doc-noteref"><a href="#fn:99" class="govuk-link" rel="footnote">[footnote 99]</a></sup>. This is part of an £80 million boost in <abbr title="artificial intelligence">AI</abbr> programming to combat inequality and drive prosperity in Africa, with the UK working alongside Canada, the Bill and Melinda Gates Foundation, the USA, Google, Microsoft, and African partners, including Kenya, Nigeria, and Rwanda.</p> <p>90. <abbr title="artificial intelligence">AI</abbr> is now also fundamental to our bilateral relationships and, in some cases, it is appropriate to build deeper and more committed bilateral partnerships alongside multilateral engagement to further our shared interests. We have therefore pursued bilateral agreements on areas including responsibly developing and deploying <abbr title="artificial intelligence">AI</abbr> with key international partners, to build the foundation for further collaboration on <abbr title="artificial intelligence">AI</abbr> governance.
For example, as part of the <abbr title="Department for Science, Innovation and Technology">DSIT</abbr> International Science Partnerships Fund<sup id="fnref:100" role="doc-noteref"><a href="#fn:100" class="govuk-link" rel="footnote">[footnote 100]</a></sup>, <abbr title="UK Research and Innovation">UKRI</abbr> will invest £9 million to bring together researchers and innovators in bilateral research partnerships with the US. These partnerships will focus on developing safe, responsible, and trustworthy <abbr title="artificial intelligence">AI</abbr> as well as <abbr title="artificial intelligence">AI</abbr> for scientific uses. Since the publication of the <abbr title="artificial intelligence">AI</abbr> regulation white paper in March 2023, we have signed:</p> <ul> <li> <p><strong>The Atlantic Declaration with the US</strong><sup id="fnref:101" role="doc-noteref"><a href="#fn:101" class="govuk-link" rel="footnote">[footnote 101]</a></sup>: which develops our strong partnership on <abbr title="artificial intelligence">AI</abbr>, underpinned by our shared democratic values and our ambition to promote safe and responsible <abbr title="artificial intelligence">AI</abbr> innovation across the world. Work under the 2023 Atlantic Declaration will ensure that our unique alliance is reinforced for the challenges of new technological developments.</p> </li> <li> <p><strong>The Hiroshima Accord with Japan</strong><sup id="fnref:102" role="doc-noteref"><a href="#fn:102" class="govuk-link" rel="footnote">[footnote 102]</a></sup>: which commits to focus on promoting human-centric and trustworthy <abbr title="artificial intelligence">AI</abbr> and interoperability between our <abbr title="artificial intelligence">AI</abbr> governance frameworks.</p> </li> <li> <p><strong>The Downing Street Accord with the Republic of Korea</strong><sup id="fnref:103" role="doc-noteref"><a href="#fn:103" class="govuk-link" rel="footnote">[footnote 103]</a></sup>: which builds on the progress achieved on safe, responsible <abbr title="artificial intelligence">AI</abbr> development, including at the <abbr title="artificial intelligence">AI</abbr> Safety Summit – the next edition of which will be co-hosted by the Republic of Korea and the UK.</p> </li> <li> <p><strong>The Joint Declaration on a Strategic Partnership with Singapore</strong><sup id="fnref:104" role="doc-noteref"><a href="#fn:104" class="govuk-link" rel="footnote">[footnote 104]</a></sup>: which harnesses expertise in new technologies such as <abbr title="artificial intelligence">AI</abbr> from the UK and Singapore. <abbr title="Department for Science, Innovation and Technology">DSIT</abbr> also signed a Memorandum of Understanding (<abbr title="Memorandum of Understanding">MoU</abbr>) on Emerging Technologies in June 2023 with Singapore’s Infocomm Media Development Authority (<abbr title="Singapore’s Infocomm Media Development Authority">IMDA</abbr>). In this <abbr title="Memorandum of Understanding">MoU</abbr>, both parties agreed to collaborate on <abbr title="artificial intelligence">AI</abbr> governance and to facilitate the development of effective and interoperable <abbr title="artificial intelligence">AI</abbr> assurance mechanisms.</p> </li> </ul> <p>91.
We have a number of other important bilateral relationships on <abbr title="artificial intelligence">AI</abbr> with countries across the world and we intend, where suitable, to build further such agreements to strengthen these partnerships, for example through bilateral MoUs and Free Trade Agreements.</p> <p>92. Only through effective global collaboration will the UK and our partners worldwide unlock the opportunities and mitigate the associated risks of <abbr title="artificial intelligence">AI</abbr>. We will continue to engage our international partners to support responsible <abbr title="artificial intelligence">AI</abbr> innovation that effectively and proportionately addresses potential <abbr title="artificial intelligence">AI</abbr> harms and aligns with the principles established in the <abbr title="artificial intelligence">AI</abbr> regulation white paper. We will also work together to promote coherence between our <abbr title="artificial intelligence">AI</abbr> governance frameworks to ensure that businesses can operate effectively in both the UK and wider global markets and to ensure that <abbr title="artificial intelligence">AI</abbr> developments benefit people around the world.</p> <h3 id="an-ai-regulation-roadmap-of-our-next-steps"> <span class="number">5.4. </span> An <abbr title="artificial intelligence">AI</abbr> regulation roadmap of our next steps</h3> <p>93. In 2024, we will:</p> <ul> <li> <p>Continue to develop our domestic policy position on <abbr title="artificial intelligence">AI</abbr> regulation by:</p> <ul> <li> <p>Engaging with a range of experts on interventions for highly capable <abbr title="artificial intelligence">AI</abbr> systems, including questions on open release, in the summer.</p> </li> <li> <p>Publishing an update on our work on new responsibilities for developers of highly capable general-purpose <abbr title="artificial intelligence">AI</abbr> systems by the end of the year.</p> </li> <li> <p>Collaborating across government and with regulators to analyse and review potential gaps in existing regulatory powers and remits on an ongoing basis.</p> </li> <li> <p>Working closely with the <abbr title="artificial intelligence">AI</abbr> Safety Institute, which will provide foundational insights to our central <abbr title="artificial intelligence">AI</abbr> risk assessment activities and inform our approach to <abbr title="artificial intelligence">AI</abbr> regulation, on an ongoing basis.
The <abbr title="artificial intelligence">AI</abbr> Safety Institute will ensure that the UK takes an evidence-based, proportionate approach to regulating the risks of <abbr title="artificial intelligence">AI</abbr>.</p> </li> </ul> </li> <li> <p>Progress action to promote <abbr title="artificial intelligence">AI</abbr> opportunities and tackle <abbr title="artificial intelligence">AI</abbr> risks by:</p> <ul> <li> <p>Conducting targeted engagement on our cross-economy <abbr title="artificial intelligence">AI</abbr> risk register and plan to assess the regulatory framework from the spring onwards.</p> </li> <li> <p>Releasing a call for views in spring to obtain further input on our next steps in securing <abbr title="artificial intelligence">AI</abbr> models, including a potential Code of Practice for cyber security of <abbr title="artificial intelligence">AI</abbr>, based on <abbr title="National Cyber Security Centre">NCSC</abbr>’s guidelines.</p> </li> <li> <p>Establishing a new international dialogue to defend democracy and address shared risks related to electoral interference ahead of the next <abbr title="artificial intelligence">AI</abbr> Safety Summit.</p> </li> <li> <p>Launching a call for evidence on <abbr title="artificial intelligence">AI</abbr>-related risks to trust in information and related issues such as deepfakes.</p> </li> <li> <p>Exploring mechanisms for providing greater transparency, including measures so that rights holders can better understand whether content they produce is used as an input into <abbr title="artificial intelligence">AI</abbr> models.</p> </li> <li> <p>Phasing in the mandatory requirement for central government departments to use the Algorithmic Transparency Recording Standard (<abbr title="Algorithmic Transparency Recording Standard">ATRS</abbr>) over the course of the year.</p> </li> </ul> </li> <li> <p>Build out the central function and support regulators by:</p> <ul> <li> <p>Launching a new £10 million programme to support regulators to identify and understand risks in their domain and to develop their skills and approaches to <abbr title="artificial intelligence">AI</abbr>.</p> </li> <li> <p>Establishing a steering committee to support and guide the activities of a formal regulator coordination structure within government in the spring.</p> </li> <li> <p>Asking key regulators to publish updates on their strategic approach to <abbr title="artificial intelligence">AI</abbr> by 30 April.</p> </li> <li> <p>Collaborating with regulators to iterate and expand our initial cross-sectoral guidance on implementing the principles, with further updates planned by summer.</p> </li> </ul> </li> <li> <p>Encourage effective <abbr title="artificial intelligence">AI</abbr> adoption and provide support for industry, innovators, and employees by:</p> <ul> <li> <p>Launching the pilot <abbr title="artificial intelligence">AI</abbr> and Digital Hub with the <abbr title="Digital Regulation Cooperation Forum">DRCF</abbr> in the spring.</p> </li> <li> <p>Publishing an Introduction to <abbr title="artificial intelligence">AI</abbr> Assurance in spring.</p> </li> <li> <p>Publishing updated guidance on the use of <abbr title="artificial intelligence">AI</abbr> within <abbr title="Human Resources">HR</abbr> and recruitment in spring.</p> </li> <li> <p>Publishing a full <abbr title="artificial intelligence">AI</abbr> skills framework that incorporates feedback from our consultation and supports employers, employees, and training providers to identify upskilling routes for
<abbr title="artificial intelligence">AI</abbr> in spring.</p> </li> <li> <p>Launching the <abbr title="artificial intelligence">AI</abbr> Management Essentials scheme to set a minimum good practice standard for companies selling <abbr title="artificial intelligence">AI</abbr> products and services by the end of the year.</p> </li> <li> <p>Publishing an update on our emerging processes guide by the end of the year.</p> </li> </ul> </li> <li> <p>Support international collaboration on <abbr title="artificial intelligence">AI</abbr> governance by:</p> <ul> <li> <p>Actioning our newly announced £9 million partnership with the US on responsible <abbr title="artificial intelligence">AI</abbr> as part of the <abbr title="Department for Science, Innovation and Technology">DSIT</abbr> International Science Partnerships Fund.</p> </li> <li> <p>Publishing the first iteration of the International Report on the Science of <abbr title="artificial intelligence">AI</abbr> Safety in spring.</p> </li> <li> <p>Sharing new knowledge with international partners through the <abbr title="artificial intelligence">AI</abbr> Safety Institute on an ongoing basis.</p> </li> <li> <p>Supporting the Republic of Korea and France on the next <abbr title="artificial intelligence">AI</abbr> Safety Summits on an ongoing basis, and considering the possible role of <abbr title="artificial intelligence">AI</abbr> Safety Summits beyond these.</p> </li> <li> <p>Continuing bilateral and multilateral partnerships on <abbr title="artificial intelligence">AI</abbr>, including the <abbr title="Group of Seven">G7</abbr>, <abbr title="Group of 20">G20</abbr>, Council of Europe, <abbr title="Organisation for Economic Co-operation and Development">OECD</abbr>, United Nations, and <abbr title="Global Partnership on AI">GPAI</abbr>, on an ongoing basis.</p> </li> </ul> </li> </ul> <h2 id="summary-of-consultation-evidence-and-government-response"> <span class="number">6. </span> Summary of consultation evidence and government response</h2> <p>94. This chapter provides a summary of the written evidence we received in response to our consultation followed by the government response. This chapter is structured by the 10 categories that we used to group our 33 consultation questions:</p> <ul> <li>The revised cross-sectoral <abbr title="artificial intelligence">AI</abbr> principles.</li> <li>A statutory duty to regard.</li> <li>New central functions to support the framework.</li> <li>Monitoring and evaluation of the framework.</li> <li>Regulator capabilities.</li> <li>Tools for trustworthy <abbr title="artificial intelligence">AI</abbr>.</li> <li>Final thoughts.</li> <li>Legal responsibility for <abbr title="artificial intelligence">AI</abbr>.</li> <li>Foundation models and the regulatory framework.</li> <li> <abbr title="artificial intelligence">AI</abbr> sandboxes and testbeds.</li> </ul> <p>95. In total, we received 409 written consultation responses from organisations and individuals. Annex A provides an overview of who we received responses from and outlines our method of analysis. We also proactively engaged with 364 individuals through roundtables, technical workshops, bilaterals, and a programme of ongoing regulator engagement. While we weave insights from this engagement throughout our analysis, Annex A provides a detailed overview of our engagement findings.</p> <h3 id="the-revised-cross-sectoral-ai-principles"> <span class="number">6.1. 
</span> The revised cross-sectoral <abbr title="artificial intelligence">AI</abbr> principles</h3> <p><strong>1. Do you agree that requiring organisations to make it clear when they are using <abbr title="artificial intelligence">AI</abbr> would improve transparency?</strong></p> <p><strong>2. Are there other measures we could require of organisations to improve transparency for <abbr title="artificial intelligence">AI</abbr>?</strong></p> <p><strong>3. Do you agree that current routes to contest or get redress for <abbr title="artificial intelligence">AI</abbr>-related harms are adequate?</strong></p> <p><strong>4. How could current routes to contest or seek redress for <abbr title="artificial intelligence">AI</abbr>-related harms be improved, if at all?</strong></p> <p><strong>5. Do you agree that, when implemented effectively, the revised cross-sectoral principles will cover the risks posed by <abbr title="artificial intelligence">AI</abbr> technologies?</strong></p> <p><strong>6. What, if anything, is missing from the revised principles?</strong></p> <p>Summary of questions 1-6:</p> <p>96. Over half of respondents agreed that, when implemented effectively, the revised principles would cover the key risks posed by <abbr title="artificial intelligence">AI</abbr> technologies. The revised principles included safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress. However, respondents also advocated for the explicit inclusion of human rights, operational resilience, data quality, international alignment, systemic risks and wider societal impacts, sustainability, and education and literacy.</p> <p>97. Respondents wanted to see further detail on the implementation of the principles, regulator capability, and interactions with existing law. Respondents consistently stressed the fast pace of technological change and reflected that the framework should be adaptable and supported by monitoring and evaluation. Some respondents were concerned that the principles would not be sufficiently enforceable, citing a lack of statutory backing.</p> <p>98. There was strong support for a range of transparency measures from respondents. Respondents emphasised that transparency was key to building public trust, accountability, and an effective and verifiable regulatory framework. A majority of respondents agreed that a requirement for organisations to make it clear when they are using <abbr title="artificial intelligence">AI</abbr> would improve transparency. Those who disagreed felt that labelling <abbr title="artificial intelligence">AI</abbr> use would be either insufficient or disproportionately burdensome. Respondents suggested a range of transparency measures including the public disclosure of inputs like compute and data; labelling <abbr title="artificial intelligence">AI</abbr> use and outputs; opt-ins and human alternatives to automated processing; explanations for <abbr title="artificial intelligence">AI</abbr> outcomes, impacts and limitations; public or organisational <abbr title="artificial intelligence">AI</abbr> registers; disclosure of model details to regulators; and independent assurance tools including audits and technical standards.</p> <p>99. Most respondents reported that current routes to contest or seek redress for <abbr title="artificial intelligence">AI</abbr>-related harms through existing legal frameworks are not adequate. 
Respondents noted that it can be difficult to identify <abbr title="artificial intelligence">AI</abbr>-related harms and that the high cost of litigation often prevents individuals from seeking redress. Many respondents wanted to see the government clarify the legal rights and responsibilities relating to <abbr title="artificial intelligence">AI</abbr>, often suggesting regulatory guidance as the means to do so. Some endorsed the introduction of statutory requirements. Respondents recommended establishing accessible redress routes, with some advocating for a central, cross-sector redress mechanism such as a dedicated <abbr title="artificial intelligence">AI</abbr> ombudsman. Respondents also noted that international agreements would be needed to ensure effective routes to contest or seek redress for <abbr title="artificial intelligence">AI</abbr>-related harms across borders. Respondents emphasised that better <abbr title="artificial intelligence">AI</abbr> transparency would help make redress more accessible across a broad range of potential harms, including intellectual property infringement.</p> <p>Response:</p> <p>100. The government wants to ensure that the UK maintains its position as a global leader in <abbr title="artificial intelligence">AI</abbr>. This means promoting safe, responsible innovation to ensure that we maximise the benefits <abbr title="artificial intelligence">AI</abbr> can bring across the country. Our cross-sectoral principles set out our expectations for the responsible design, development, and application of <abbr title="artificial intelligence">AI</abbr> to help guide businesses and organisations building and using these technologies. We are encouraged to see that most respondents agree that the revised cross-sectoral principles will cover the risks posed by <abbr title="artificial intelligence">AI</abbr> when implemented effectively.</p> <p>101. We expect regulators to apply the principles within their existing remits and in line with our existing laws and values, respecting the UK’s long history of democracy, strong rule of law, and commitments to human rights and environmental sustainability. As aspects of these values and rules are enshrined in the law that regulators are bound to follow, we do not think it is necessary to include democracy, human rights, the rule of law, or sustainability specifically within the principles themselves. The guidance we are publishing alongside this consultation response will support regulators to implement the principles within their respective domains.</p> <p>102. The principles already cover issues raised by respondents linked to both operational resilience (safety, security, and robustness) and data protection (transparency, fairness, and accountability). We expect all actors across the <abbr title="artificial intelligence">AI</abbr> life cycle to adhere to existing legal frameworks, including data protection law. The UK’s existing data protection legislation (UK <abbr title="UK General Data Protection Regulation">GDPR</abbr> and the Data Protection Act 2018) regulates the development of <abbr title="artificial intelligence">AI</abbr> systems and other technologies where personal data is involved. The Data Protection and Digital Information Bill will clarify the rights of data subjects to specific safeguards when subject to solely automated decisions that have significant effects on them.
Furthermore, the Information Commissioner’s Office (<abbr title="Information Commissioner's Office">ICO</abbr>) has created specific guidance on how to use data for <abbr title="artificial intelligence">AI</abbr> in compliance with data protection law<sup id="fnref:105" role="doc-noteref"><a href="#fn:105" class="govuk-link" rel="footnote">[footnote 105]</a></sup>. Beyond the scope of data protection law, the government is assessing a range of possible interventions aligned with the principles as part of our work to encourage the responsible and safe development of highly capable <abbr title="artificial intelligence">AI</abbr>. For example, we are exploring if and how to introduce targeted measures on developers of highly capable general-purpose <abbr title="artificial intelligence">AI</abbr> systems related to transparency requirements (for example, on training data), risk management, and accountability and corporate governance-related obligations. Similarly, our central risk assessment activities will identify and monitor a range of risks, providing cross-economy oversight that will capture systemic risks and wider societal impacts.</p> <p>103. We acknowledge the broad support for transparency and we will continue our work assessing whether and which measures provide the most meaningful transparency for <abbr title="artificial intelligence">AI</abbr> end users and actors across the <abbr title="artificial intelligence">AI</abbr> life cycle. It is important that we take an evidence-based approach to transparency. The Algorithmic Transparency Recording Standard (<abbr title="Algorithmic Transparency Recording Standard">ATRS</abbr>) is a practical mechanism for transparency that was developed through public engagement and has been piloted across the UK<sup id="fnref:106" role="doc-noteref"><a href="#fn:106" class="govuk-link" rel="footnote">[footnote 106]</a></sup>. The <abbr title="Algorithmic Transparency Recording Standard">ATRS</abbr> helps public sector organisations provide clear information about algorithmic tools they use in decision-making. As mentioned in section 5.1, we will now make use of the <abbr title="Algorithmic Transparency Recording Standard">ATRS</abbr> a requirement for all government departments and plan to expand this across the broader public sector over time. While measures like watermarking can help users identify <abbr title="artificial intelligence">AI</abbr>-generated content, we need to ensure that proposed interventions are robust, cannot be easily overridden, and achieve positive outcomes. To establish greater transparency on <abbr title="artificial intelligence">AI</abbr> outputs, we published an “Emerging processes for frontier <abbr title="artificial intelligence">AI</abbr> safety” document that outlines three areas of practice related to identifying <abbr title="artificial intelligence">AI</abbr>-generated content, including research techniques, watermarking, and <abbr title="artificial intelligence">AI</abbr> output databases<sup id="fnref:107" role="doc-noteref"><a href="#fn:107" class="govuk-link" rel="footnote">[footnote 107]</a></sup>. As mentioned in section 5.2.2, we will update this guide by the end of the year and continue to encourage <abbr title="artificial intelligence">AI</abbr> companies to develop best practices.</p> <p>104.
Our expert regulators are already using their existing remits to implement the <abbr title="artificial intelligence">AI</abbr> principles, including the contestability and redress principle, which includes expectations about clarifying existing routes to redress. We recognise the link between the fair and effective allocation of liability throughout the <abbr title="artificial intelligence">AI</abbr> life cycle and the availability and clarity of routes to redress. Our work to explore existing liability frameworks and accountability through the value chain is ongoing and includes analysis of the availability of redress mechanisms. As a first step towards ensuring fair and effective allocation of accountability and liability, the government is considering introducing targeted binding requirements on developers of highly capable general-purpose <abbr title="artificial intelligence">AI</abbr> systems, which may involve creating or allocating new regulatory powers.</p> <h3 id="a-statutory-duty-to-regard"> <span class="number">6.2. </span> A statutory duty to regard</h3> <p><strong>7. Do you agree that introducing a statutory duty on regulators to have due regard to the principles would clarify and strengthen regulators’ mandates to implement our principles, while retaining a flexible approach to implementation?</strong></p> <p><strong>8. Is there an alternative statutory intervention that would be more effective?</strong></p> <p>Summary of questions 7-8:</p> <p>105. Most respondents somewhat or strongly agreed that introducing a statutory duty on regulators to have due regard to the principles set out in the <abbr title="artificial intelligence">AI</abbr> regulation white paper would clarify and strengthen regulators’ mandates to implement the principles while retaining a flexible approach to implementation. However, nearly a quarter noted that regulators would need enhanced resources and capabilities in order to enact a statutory duty effectively.</p> <p>106. Around a third of respondents argued that additional, targeted statutory measures would be necessary to effectively implement the regulatory framework. Many suggested expanding regulator powers, noting that the existing statutory remits of some regulators would limit their ability to implement the framework. In particular, respondents raised the need to review and potentially expand the investigatory powers and capabilities of regulators in regard to <abbr title="artificial intelligence">AI</abbr>.</p> <p>107. Some advocated for wider, horizontal statutory measures such as specific <abbr title="artificial intelligence">AI</abbr> legislation, a new <abbr title="artificial intelligence">AI</abbr> regulator, and strict rules about the use of <abbr title="artificial intelligence">AI</abbr> in certain contexts.</p> <p>108. Other respondents felt that, if rushed, the implementation of a duty to regard could disrupt regulation, innovation, and trust. These respondents recommended that the duty should be reviewed after a period of non-statutory implementation, particularly to observe interactions with existing law and regulatory remits. Some respondents noted that the end goal and timeframes for the <abbr title="artificial intelligence">AI</abbr> regulatory framework were not clear, causing uncertainty.</p> <p>Response:</p> <p>109. We are encouraged that respondents to this question are enthusiastic about the proper and effective implementation of our cross-sectoral <abbr title="artificial intelligence">AI</abbr> principles.
We welcome the broad support for a statutory duty on regulators, recognising that respondents also gave conditions and alternatives that could be used to implement the framework effectively. As set out in the <abbr title="artificial intelligence">AI</abbr> regulation white paper, we anticipate introducing a statutory duty on regulators requiring them to have due regard to the principles after reviewing an initial period of non-statutory implementation.</p> <p>110. We acknowledge concerns from respondents that rushing the implementation of a duty to regard could cause disruption to responsible <abbr title="artificial intelligence">AI</abbr> innovation. We will not rush to legislate but will evaluate whether it is necessary and effective to introduce a statutory duty on regulators to have due regard to the principles. We currently think that a non-statutory approach offers critical adaptability but we will keep this under review, for example by assessing the updates on strategic approaches to <abbr title="artificial intelligence">AI</abbr> that the government has asked a number of regulators to publish by 30 April 2024. We will also work with government departments and regulators to analyse and review potential gaps in existing regulatory powers and remits.</p> <p>111. We are pleased to see that many regulators are taking proactive steps to address <abbr title="artificial intelligence">AI</abbr> and implement the principles within their remits. This includes work by the Competition and Markets Authority (<abbr title="Competition and Markets Authority">CMA</abbr>), Advertising Standards Authority (ASA), and Office of Communications (<abbr title="Office of Communications">Ofcom</abbr>)<sup id="fnref:108" role="doc-noteref"><a href="#fn:108" class="govuk-link" rel="footnote">[footnote 108]</a></sup>. Others are progressing their existing plans in ways that align with these principles, such as the <abbr title="Information Commissioner's Office">ICO</abbr> and Medicines and Healthcare products Regulatory Agency (<abbr title="Medicines and Healthcare products Regulatory Agency">MHRA</abbr>)<sup id="fnref:109" role="doc-noteref"><a href="#fn:109" class="govuk-link" rel="footnote">[footnote 109]</a></sup>.</p> <p>112. We continue to work closely with regulators to develop the framework, ensure coherent implementation, and build regulator capability. To support a coherent approach across sectors, we are publishing, alongside this response, initial guidance to regulators on how to apply the cross-sectoral <abbr title="artificial intelligence">AI</abbr> principles within their existing remits. We will update this guidance over time to ensure that it reflects developments in our regime and technological advances in <abbr title="artificial intelligence">AI</abbr>. We will establish a steering committee by spring 2024 to support and guide the activity of the central regulator coordination function (see section 5.1.2 for details).</p> <p>113. We note respondents’ concerns across the consultation that any new rules for <abbr title="artificial intelligence">AI</abbr> should not contradict or duplicate existing laws. We will continue to evaluate any potential gaps or frictions within the existing statutory remits of regulators and current legislative frameworks.
In the white paper, we said that we would keep the wider <abbr title="artificial intelligence">AI</abbr> landscape under review in order to inform future iterations of the regulatory framework, including whether further interventions on foundation models may be required. We will consult on our plan for monitoring and evaluating the regulatory framework in 2024 (see our response to questions on monitoring and evaluation in section 6.4 for more detail).</p> <h3 id="new-central-functions-to-support-the-framework"> <span class="number">6.3. </span> New central functions to support the framework</h3> <p><strong>9. Do you agree that the functions outlined in section 3.3.1 would benefit our <abbr title="artificial intelligence">AI</abbr> regulation framework if delivered centrally?</strong></p> <p><strong>10. What, if anything, is missing from the central functions?</strong></p> <p><strong>11. Do you know of any existing organisations who should deliver one or more of our proposed central functions?</strong></p> <p><strong>12. Are there additional activities that would help businesses confidently innovate and use <abbr title="artificial intelligence">AI</abbr> technologies?</strong></p> <p><strong>12.1. If so, should these activities be delivered by government, regulators, or a different organisation?</strong></p> <p><strong>13. Are there additional activities that would help individuals and consumers confidently use <abbr title="artificial intelligence">AI</abbr> technologies?</strong></p> <p><strong>13.1. If so, should these activities be delivered by government, regulators, or a different organisation?</strong></p> <p><strong>14. How can we avoid overlapping, duplicative, or contradictory guidance on <abbr title="artificial intelligence">AI</abbr> issued by different regulators?</strong></p> <p>Summary of questions 9-14:</p> <p>114. Nearly all respondents agreed that delivering the proposed functions centrally would benefit the <abbr title="artificial intelligence">AI</abbr> regulation framework, with many praising the approach for ensuring that the government can monitor and iterate the framework.</p> <p>115. While respondents widely supported the proposed central functions, many wanted more detail on each function and its activities. Some respondents felt there should be a greater emphasis on partnerships and collaboration to deliver the activities. Respondents also wanted more detail on international collaboration. Some suggested that the government should prioritise building the central risk function. Of these responses, a few noted that more consideration should be given to ethical and societal risks.</p> <p>116. Respondents emphasised that the regulatory functions should build from the existing strengths of the UK’s regulatory landscape, with approximately a third identifying regulators as organisations who should deliver one or more central functions. Overall, respondents emphasised that effective delivery would require collaboration between government, regulators, industry, civil society, academia, and the general public. Over a quarter of respondents felt that technology-focused research institutes and think tanks could help deliver the central functions.</p> <p>117. Respondents suggested a range of additional activities that government and regulators could offer to support industry. Around a third of respondents felt that training products and educational resources would help organisations to apply the principles to everyday business practices. 
Nearly a quarter suggested that regulators should produce guidance to allow businesses to innovate confidently. Some noted the importance of internationally interoperable frameworks for <abbr title="artificial intelligence">AI</abbr> regulation to ensure a low compliance burden on organisations building, selling, and using <abbr title="artificial intelligence">AI</abbr> technologies. Respondents also argued that more work is needed to ensure that businesses have access to high-quality, diverse, and ethically sourced data to support their <abbr title="artificial intelligence">AI</abbr> innovation efforts.</p> <p>118. When thinking about additional activities for individuals and consumers, respondents prioritised transparency among the cross-sectoral principles, with nearly half arguing that individuals and consumers should be able to identify when and how <abbr title="artificial intelligence">AI</abbr> is being used by a service or organisation. More than a third of respondents felt that education and training would enable consumers to use <abbr title="artificial intelligence">AI</abbr> products and services safely and more effectively.</p> <p>119. Around a third suggested that the proposed central functions would be the most effective mechanism to avoid overlapping, duplicative, or contradictory guidance.</p> <p>Response:</p> <p>120. We welcome the strong support for the central functions proposed in the <abbr title="artificial intelligence">AI</abbr> regulation white paper to coordinate, monitor, and adapt the <abbr title="artificial intelligence">AI</abbr> framework. Together, these functions will provide clarity, ensure the framework works as intended, and future-proof the UK’s regulatory approach. That is why we have already started to establish the central function within government to undertake the activities proposed in the white paper (see section 5.1.2 for details).</p> <p>121. We note respondents’ concerns around the potential risks posed by the rapid developments in <abbr title="artificial intelligence">AI</abbr> technology. We have already established the risk monitoring and assessment activities of the central function within <abbr title="Department for Science, Innovation and Technology">DSIT</abbr>, reflecting the strong recommendation from respondents to operationalise cross-economy <abbr title="artificial intelligence">AI</abbr> risk management as a priority. Our centralised risk assessment activities will identify, measure, and monitor existing and emerging <abbr title="artificial intelligence">AI</abbr> risks using expertise from across government, industry, and academia, including the <abbr title="artificial intelligence">AI</abbr> Safety Institute. This will allow us to monitor risks holistically and identify any potential gaps in our approach. Horizon scanning will extend our central risk assessment activities, monitoring emerging <abbr title="artificial intelligence">AI</abbr> trends and opportunities to maximise benefits while taking a proportionate approach to <abbr title="artificial intelligence">AI</abbr> risks. This year, we will conduct targeted engagement on our cross-economy <abbr title="artificial intelligence">AI</abbr> risk register.</p> <p>122. Reflecting respondents’ views that the proposed central function will help regulators avoid producing overlapping, duplicative, or contradictory guidance, we are developing a coordination function to support regulators to interpret and apply the principles within their remits (see section 5.1.2 for detail).
As part of this, we will establish a steering committee in the spring with government representatives and key regulators to support knowledge exchange and coordination on <abbr title="artificial intelligence">AI</abbr> governance. To further support regulators and ensure that the UK’s strength in <abbr title="artificial intelligence">AI</abbr> research is fully utilised in our regulatory framework, we have also announced a £10 million package to support regulator <abbr title="artificial intelligence">AI</abbr> capabilities and a new commitment by UK Research and Innovation (<abbr title="UK Research and Innovation">UKRI</abbr>) to improve links between regulators and the skills, expertise, and activities supported by their future investments in <abbr title="artificial intelligence">AI</abbr> research.</p> <p>123. To ensure appropriate levels of coherence with emerging approaches to <abbr title="artificial intelligence">AI</abbr> regulation in other jurisdictions, we will continue to work with international partners on regulatory interoperability, including technical standards and assurance techniques, to make it easier for UK companies to attract overseas investment and trade internationally. For more detail, see section 5.3 and our response to questions on tools for trustworthy <abbr title="artificial intelligence">AI</abbr> in 6.6.</p> <p>124. Alongside this, we have announced a new pilot regulatory service to be hosted by the Digital Regulation Cooperation Forum (<abbr title="Digital Regulation Cooperation Forum">DRCF</abbr>) to make it easier for <abbr title="artificial intelligence">AI</abbr> and digital innovators to navigate the regulatory landscape (see our response to questions on <abbr title="artificial intelligence">AI</abbr> sandboxes for more detail: section 6.10).</p> <p>125. We remain committed to the iterative approach set out in the white paper, anticipating that our framework will need to evolve as new risks or regulatory gaps emerge. Our monitoring and evaluation activities will assess if, when, and how we make changes to our framework, gathering evidence from a wide range of sources. We provide more detail in our response to questions on monitoring and evaluation in section 6.4.</p> <p>126. We are encouraged that respondents endorsed a wide range of organisations in the UK as useful partners to deliver the proposed centralised activities. As we said in the white paper, the government will deliver the central function initially, working in partnership with regulators and other key actors in the <abbr title="artificial intelligence">AI</abbr> ecosystem. The government’s primary role will be to leverage existing activities where possible and ensure that all the necessary activities to promote responsible <abbr title="artificial intelligence">AI</abbr> innovation are taking place.</p> <h3 id="monitoring-and-evaluation-of-the-framework"> <span class="number">6.4. </span> Monitoring and evaluation of the framework</h3> <p><strong>15. Do you agree with our overall approach to monitoring and evaluation?</strong></p> <p><strong>16. What is the best way to measure the impact of our framework?</strong></p> <p><strong>17. Do you agree that our approach strikes the right balance between supporting <abbr title="artificial intelligence">AI</abbr> innovation; addressing known, prioritised risks; and future-proofing the <abbr title="artificial intelligence">AI</abbr> regulation framework?</strong></p> <p>Summary of questions 15-17:</p> <p>127.
A majority of respondents agreed with the overall approach to monitoring and evaluation, commending the proposed feedback loop with industry and civil society as a means to gain insights about the effectiveness of the framework.</p> <p>128. Just over a quarter of respondents emphasised that engaging with a diverse range of stakeholders would create the most valuable insights. Many advocated for the inclusion of wider civil society and consumer representatives to ensure that voices outside of the tech industry are heard, as well as regular engagement with industry and research experts. Respondents also stressed that international engagement would be key to effectively harmonise approaches across jurisdictions.</p> <p>129. Respondents wanted to see more detail on the practicalities of the monitoring and evaluation framework, including how data will be collected and used to measure success. Nearly a third of respondents suggested the impact of the framework should be measured through a range of data sources, and recommended collecting data on key indicators as well as using impact assessments.</p> <p>130. Half of respondents agreed that the approach appears to strike the right balance between supporting <abbr title="artificial intelligence">AI</abbr> innovation; addressing known, prioritised risks; and future-proofing the <abbr title="artificial intelligence">AI</abbr> regulation framework. However, some respondents disagreed and argued that the approach prioritised <abbr title="artificial intelligence">AI</abbr> innovation and economic growth over safety and the mitigation of <abbr title="artificial intelligence">AI</abbr>-related risks.</p> <p>Response:</p> <p>131. We are pleased to note the positive feedback on our proposed approach to the monitoring and evaluation of the framework. Monitoring and evaluation activities will allow us to review the implementation of the <abbr title="artificial intelligence">AI</abbr> regulation framework across the economy and are at the heart of our iterative approach. They will ensure that the regime is working as intended: actively responding to prioritised risks, supporting innovation, and maximising the benefits of <abbr title="artificial intelligence">AI</abbr> across the UK. We agree with respondents that, as we implement the framework set out in the <abbr title="artificial intelligence">AI</abbr> regulation white paper, monitoring and evaluation will allow the government to spot potential issues and adapt the framework in response if needed.</p> <p>132. We acknowledge growing concerns that we may face more safety risks related to <abbr title="artificial intelligence">AI</abbr> as these technologies are increasingly used. We recognise that many of these concerns focus on the advanced capabilities of the most powerful <abbr title="artificial intelligence">AI</abbr> systems. That is why we remain committed to an adaptable approach that will evolve as new risks or regulatory gaps emerge. Our initial thinking on potential new measures targeted at the developers of highly capable general-purpose <abbr title="artificial intelligence">AI</abbr> models is presented in section 5.2. The <abbr title="artificial intelligence">AI</abbr> Safety Institute will advance <abbr title="artificial intelligence">AI</abbr> safety capabilities for the public interest, allowing the government to respond to the cutting edge of technological development.
Our monitoring and evaluation will build on work by the Institute, our cross-sectoral risk assessment, and feedback from stakeholders to understand how the regulatory framework is performing. Our evaluation will consider whether the framework is effectively achieving the objectives set out in the white paper, including building public trust by addressing potential risks appropriately.</p> <p>133. We note the emphasis from respondents on using the right data, metrics, and sources to evaluate how well the regulatory framework is performing. We agree that it is key to the effectiveness of the framework to get the measures of success right, and we are actively working on this as we develop our monitoring and evaluation framework for publication. In the spring, we will conduct a targeted consultation with a range of stakeholders on our proposed plan for assessing the framework. As part of this, we will seek detailed views on our proposed metrics and data sources.</p> <h3 id="regulator-capabilities"> <span class="number">6.5. </span> Regulator capabilities</h3> <p><strong>18. Do you agree that regulators are best placed to apply the principles and government is best placed to provide oversight and deliver central functions?</strong></p> <p><strong>19. As a regulator, what support would you need in order to apply the principles in a proportionate and pro-innovation way?</strong></p> <p><strong>20. Do you agree that a pooled team of <abbr title="artificial intelligence">AI</abbr> experts would be the most effective way to address capability gaps and help regulators apply the principles?</strong></p> <p>Summary of questions 18-20:</p> <p>134. Nearly all respondents agreed that regulators are best placed to lead the implementation of the principles, and that the government is best placed to provide oversight and delivery of the central functions. However, respondents argued that the government would need to improve regulator capability in order for this approach to be effective. Some respondents were concerned at the lack of a specific body to support the implementation and oversight of the proposed framework, with some asking for <abbr title="artificial intelligence">AI</abbr> legislation and a new <abbr title="artificial intelligence">AI</abbr> regulator.</p> <p>135. While regulators were broadly supportive of the proposed approach, over a quarter of those that responded to Q19 suggested that increased <abbr title="artificial intelligence">AI</abbr> expertise would help them effectively apply the principles within their existing remits. Overall, regulators reported different levels of technical expertise and <abbr title="artificial intelligence">AI</abbr> capability. Some felt that greater organisational capacity and additional resources would help them undertake new responsibilities related to <abbr title="artificial intelligence">AI</abbr> and understand where and how <abbr title="artificial intelligence">AI</abbr> is used in their domains.</p> <p>136. Regulators also noted that <abbr title="artificial intelligence">AI</abbr> presents coordination challenges across domains and sectors, with some emerging risks related to <abbr title="artificial intelligence">AI</abbr> not falling clearly within a specific existing remit. Just over a quarter of regulators that responded to Q19 emphasised that close collaboration between regulators and the proposed central functions would help build meaningful sector-specific requirements and prevent duplication.</p> <p>137. 
A majority of respondents agreed that a pooled team of <abbr title="artificial intelligence">AI</abbr> experts would be the most effective way to address the different levels of capability across the regulatory landscape. Respondents advocated for a diverse and multi-disciplinary pool to bring together technical <abbr title="artificial intelligence">AI</abbr> expertise with sector-specific regulatory knowledge, industry specialists, and civil society. Respondents argued that this would ensure that regulators are considering a broad range of perspectives in their application of the cross-sectoral <abbr title="artificial intelligence">AI</abbr> principles.</p> <p>Response:</p> <p>138. We are encouraged that respondents broadly agree with the proposed regulator-led approach for the implementation of the principles, with the government providing oversight and delivering the central function. As outlined in the <abbr title="artificial intelligence">AI</abbr> regulation white paper, our existing expert regulators are best placed to conduct detailed risk analysis and enforcement activities within their areas of expertise. We will continue to work closely with regulators to ensure that potential risks posed by <abbr title="artificial intelligence">AI</abbr> are sufficiently covered by existing law. In keeping with our iterative approach, we will seek to adapt the framework, including the regulatory architecture, if analysis proves this is necessary and effective.</p> <p>139. As pointed out by respondents across the consultation, to regulate <abbr title="artificial intelligence">AI</abbr> effectively our regulators must have the right skills, tools, and expertise. To support regulators’ ability to adapt and respond to the risks and opportunities that <abbr title="artificial intelligence">AI</abbr> presents in their domains, we are today announcing a £10 million investment in technical upskilling. We will work closely with regulators to identify the most promising opportunities to leverage this funding, including designing a delivery model that can achieve the intended objectives more effectively than the central pool of expertise proposed in the <abbr title="artificial intelligence">AI</abbr> regulation white paper. In particular, regulator feedback has shown that we need to support regulators to develop tools and skills within their specific domains – albeit working collaboratively where appropriate – and deliver support that aligns with and safeguards their independence. As capability and resource vary across regulators, our intention is that this fund will particularly enable those regulators with less mature <abbr title="artificial intelligence">AI</abbr> expertise to conduct research and uncover foundational insights to develop or adapt practical tools to ensure compliance in an <abbr title="artificial intelligence">AI</abbr>-enabled future.</p> <p>140. Further, as set out in the response to Professor Dame Angela McLean’s cross-cutting review of pro-innovation regulation of technologies<sup id="fnref:110" role="doc-noteref"><a href="#fn:110" class="govuk-link" rel="footnote">[footnote 110]</a></sup>, the government is also exploring how to further support regulators to develop the specialist skills necessary to regulate emerging technologies, including increased flexibility on pay and conditions. This builds on schemes already in place to support secondments between government departments, regulators, academia, and industry.</p> <p>141. 
We acknowledge regulators’ concerns that <abbr title="artificial intelligence">AI</abbr> can pose coordination challenges. In the white paper, we proposed a number of centralised activities to support regulators and ensure that the regulatory landscape for <abbr title="artificial intelligence">AI</abbr> is consistent and cohesive. To facilitate cross-cutting collaboration and ensure that the overall regulatory framework functions as intended, we are developing our regulatory coordination activities. These coordination activities will sit in our central function in government alongside our <abbr title="artificial intelligence">AI</abbr> risk assessment activities (see more detail in section 5.1.2). To support a coherent approach across sectors, we are also publishing initial guidance to regulators alongside this response on how to apply the cross-sectoral <abbr title="artificial intelligence">AI</abbr> principles within their existing remits.</p> <p>142. We note respondents’ emphasis on transparency and the need for industry and civil society to have visibility of the <abbr title="artificial intelligence">AI</abbr> regulation framework. We agree that establishing feedback loops with industry, academia and civil society will be key to measuring the effectiveness of the framework. Our central function will engage stakeholders to ensure that a wide range of voices are heard and considered: providing clarity, building trust, ensuring interoperability, and informing the government of the need to adapt the framework.</p> <h3 id="tools-for-trustworthy-ai"> <span class="number">6.6. </span> Tools for trustworthy <abbr title="artificial intelligence">AI</abbr> </h3> <p><strong>21. Which non-regulatory tools for trustworthy <abbr title="artificial intelligence">AI</abbr> would most help organisations to embed the <abbr title="artificial intelligence">AI</abbr> regulation principles into existing business processes?</strong></p> <p>Summary of question 21:</p> <p>143. There was strong support for the use of technical standards and assurance techniques, with some respondents agreeing that both would help organisations to embed the <abbr title="artificial intelligence">AI</abbr> principles into existing business processes. Many respondents praised the UK <abbr title="artificial intelligence">AI</abbr> Standards Hub and the Centre for Data Ethics and Innovation’s (<abbr title="Centre for Data Ethics and Innovation">CDEI</abbr>) work on <abbr title="artificial intelligence">AI</abbr> assurance. While some respondents noted that businesses would have a smaller compliance burden if tools and processes were consistent across sectors, others noted the importance of additional sector-specific tools and processes. Respondents also suggested supplementing technical standards with case studies and examples of good practice.</p> <p>144. Respondents argued that standardised tools and techniques for identifying and mitigating potential risks related to <abbr title="artificial intelligence">AI</abbr> would also support organisations to embed the <abbr title="artificial intelligence">AI</abbr> principles. Some identified assurance techniques such as impact and risk assessments, model performance monitoring, model uncertainty evaluations, and red teaming as particularly helpful for identifying <abbr title="artificial intelligence">AI</abbr> risks. A few respondents recommended assurance techniques that can be used to detect and prevent issues such as drift to mitigate risks related to data. 
While commending the role of tools for trustworthy <abbr title="artificial intelligence">AI</abbr>, a small number of respondents also expressed a desire for more stringent regulatory measures, such as statutory requirements for high-risk applications of <abbr title="artificial intelligence">AI</abbr> or a watchdog for foundation models.</p> <p>145. Respondents felt that tools and techniques such as fairness metrics, transparency reports, and organisational <abbr title="artificial intelligence">AI</abbr> ethics guidelines can support the responsible use of <abbr title="artificial intelligence">AI</abbr> while growing public trust in the technology. Respondents expressed the desire for third-party verification of <abbr title="artificial intelligence">AI</abbr> models through bias audits, consumer labelling schemes, and external certification against technical standards.</p> <p>146. A few respondents noted the benefits of international harmonisation across <abbr title="artificial intelligence">AI</abbr> governance approaches for both organisations and consumers. Some endorsed interoperable technical standards for <abbr title="artificial intelligence">AI</abbr>, commending global standards development organisations (<abbr title="Standards Development Organisations">SDOs</abbr>) such as the International Organization for Standardization (<abbr title="International Organization for Standardization">ISO</abbr>) and Institute of Electrical and Electronics Engineers (<abbr title="Institute of Electrical and Electronics Engineers">IEEE</abbr>). Others noted the strength of a range of international work on <abbr title="artificial intelligence">AI</abbr> including that by individual countries, such as the USA’s National Institute of Standards and Technology (<abbr title="National Institute of Standards and Technology">NIST</abbr>) <abbr title="artificial intelligence">AI</abbr> Risk Management Framework (<abbr title="Risk Management Framework">RMF</abbr>) and Singapore’s <abbr title="artificial intelligence">AI</abbr> Verify Foundation, along with work on international governance by multilateral bodies such as the Organisation for Economic Co-operation and Development (<abbr title="Organisation for Economic Co-operation and Development">OECD</abbr>), United Nations (<abbr title="United Nations">UN</abbr>), and <abbr title="Group of Seven">G7</abbr>.</p> <p>Response:</p> <p>147. We are pleased to see such strong support for the continued development and adoption of technical standards and assurance techniques for <abbr title="artificial intelligence">AI</abbr>. These tools will help organisations put our proposed regulatory principles into practice, innovate responsibly, and build public confidence. We recognise that, in some instances, it will be important to have assurance techniques and technical standards that are specific to a particular context, application, or sector. That is why, in the <abbr title="artificial intelligence">AI</abbr> regulation white paper, we set out a layered approach to technical standards, encouraging regulators to build on widely applicable sector-agnostic tools where appropriate<sup id="fnref:111" role="doc-noteref"><a href="#fn:111" class="govuk-link" rel="footnote">[footnote 111]</a></sup>.</p> <p>148. We welcome praise for the UK <abbr title="artificial intelligence">AI</abbr> Standards Hub and <abbr title="Centre for Data Ethics and Innovation">CDEI</abbr>. 
Launched in October 2022, the Hub brings together the UK’s technical expertise on <abbr title="artificial intelligence">AI</abbr> standards, including the Alan Turing Institute, British Standards Institution, and National Physical Laboratory, to provide training and information on the complex international <abbr title="artificial intelligence">AI</abbr> standards landscape. The <abbr title="Centre for Data Ethics and Innovation">CDEI</abbr> published a Portfolio of <abbr title="artificial intelligence">AI</abbr> Assurance Techniques in June 2023 with examples from the real world to support the development of trustworthy <abbr title="artificial intelligence">AI</abbr>, which respondents indicated would be helpful<sup id="fnref:112" role="doc-noteref"><a href="#fn:112" class="govuk-link" rel="footnote">[footnote 112]</a></sup>. The Portfolio is also part of the <abbr title="Organisation for Economic Co-operation and Development">OECD</abbr>’s Catalogue of Tools and Metrics for Trustworthy <abbr title="artificial intelligence">AI</abbr>, which shares the <abbr title="Centre for Data Ethics and Innovation">CDEI</abbr> case studies with an international audience. The <abbr title="Centre for Data Ethics and Innovation">CDEI</abbr> also launched the “Fairness Innovation Challenge” in October 2023 to support the development of new socio-technical solutions to address bias and discrimination in <abbr title="artificial intelligence">AI</abbr> systems<sup id="fnref:113" role="doc-noteref"><a href="#fn:113" class="govuk-link" rel="footnote">[footnote 113]</a></sup>. Today we are announcing that the Centre for Data Ethics and Innovation (<abbr title="Centre for Data Ethics and Innovation">CDEI</abbr>) is changing its name to the Responsible Technology Adoption Unit to more accurately reflect its role within the Department for Science, Innovation and Technology (<abbr title="Department for Science, Innovation and Technology">DSIT</abbr>) to develop tools and techniques that enable responsible adoption of <abbr title="artificial intelligence">AI</abbr> in the private and public sectors. This year, <abbr title="Department for Science, Innovation and Technology">DSIT</abbr> will publish an “Introduction to <abbr title="artificial intelligence">AI</abbr> assurance” to further promote the value of <abbr title="artificial intelligence">AI</abbr> assurance.</p> <p>149. We note that respondents would like to see more standardised tools and techniques to identify and manage <abbr title="artificial intelligence">AI</abbr> risk. Ahead of the <abbr title="artificial intelligence">AI</abbr> Safety Summit in November 2023, we published “Emerging processes for frontier <abbr title="artificial intelligence">AI</abbr> safety” to help prompt a debate about what good safety processes for advanced <abbr title="artificial intelligence">AI</abbr> systems look like<sup id="fnref:114" role="doc-noteref"><a href="#fn:114" class="govuk-link" rel="footnote">[footnote 114]</a></sup>. The document provides a snapshot of promising ideas, emerging processes, and associated practices in <abbr title="artificial intelligence">AI</abbr> safety. It is intended as a point of reference to inform the development of frontier <abbr title="artificial intelligence">AI</abbr> organisations’ safety policies as well as a companion for readers of these policies. 
It outlines early thinking on practices for innovation in frontier <abbr title="artificial intelligence">AI</abbr> development, including model evaluations and red teaming, responsible capability scaling, and model reporting and information sharing. In 2024, we will encourage <abbr title="artificial intelligence">AI</abbr> companies to develop their <abbr title="artificial intelligence">AI</abbr> safety and responsible capability scaling policies. As part of this work, we will update our emerging processes guide by the end of the year. More widely, we note the development of relevant global technical standards which provide guidance on risk management related to <abbr title="artificial intelligence">AI</abbr>. For example, standard <abbr title="International Organization for Standardization">ISO</abbr> 42001 will help organisations manage their <abbr title="artificial intelligence">AI</abbr> systems in a trustworthy way.</p> <p>150. In the white paper, we note that responding to risk and building public trust are key drivers for regulation. We therefore understand respondents’ emphasis on tools for building public trust as a key way to ensure responsible <abbr title="artificial intelligence">AI</abbr> innovation. The Responsible Technology Adoption Unit (formerly <abbr title="Centre for Data Ethics and Innovation">CDEI</abbr>) within <abbr title="Department for Science, Innovation and Technology">DSIT</abbr> has a specialist Public Insights team that regularly engages with the general public and affected communities to build a deep understanding of public attitudes towards <abbr title="artificial intelligence">AI</abbr><sup id="fnref:115" role="doc-noteref"><a href="#fn:115" class="govuk-link" rel="footnote">[footnote 115]</a></sup>. These insights are used by <abbr title="Department for Science, Innovation and Technology">DSIT</abbr> and wider government to align our regulatory approaches to <abbr title="artificial intelligence">AI</abbr> with public values and foster trust in these technologies. <abbr title="Department for Science, Innovation and Technology">DSIT</abbr> and the Central Digital and Data Office (<abbr title="Central Digital and Data Office">CDDO</abbr>) have also developed the Algorithmic Transparency Recording Standard (<abbr title="Algorithmic Transparency Recording Standard">ATRS</abbr>) to help public sector organisations provide clear information about algorithmic tools they use to support decisions<sup id="fnref:116" role="doc-noteref"><a href="#fn:116" class="govuk-link" rel="footnote">[footnote 116]</a></sup>. Following a successful pilot of the standard, and publication of an approved cross-government version last year, we will now make the use of the <abbr title="Algorithmic Transparency Recording Standard">ATRS</abbr> a requirement for all government departments and plan to expand this across the broader public sector over time.</p> <p>151. We agree with respondents that international cooperation on <abbr title="artificial intelligence">AI</abbr> governance will be key to successfully mitigating <abbr title="artificial intelligence">AI</abbr>-related risks and building public trust in <abbr title="artificial intelligence">AI</abbr>. The first ever <abbr title="artificial intelligence">AI</abbr> Safety Summit convened a group of representatives from around the globe to set a new path for collective international action to navigate the opportunities and risks of frontier <abbr title="artificial intelligence">AI</abbr>. 
We also continue to collaborate internationally on <abbr title="artificial intelligence">AI</abbr> governance, both bilaterally and through several multilateral fora. For example, the UK plays an important role in <abbr title="artificial intelligence">AI</abbr> discussions at the <abbr title="United Nations">UN</abbr>, Council of Europe, <abbr title="Organisation for Economic Co-operation and Development">OECD</abbr>, <abbr title="Group of Seven">G7</abbr>, Global Partnership on <abbr title="artificial intelligence">AI</abbr> (<abbr title="Global Partnership on AI">GPAI</abbr>), and <abbr title="Group of 20">G20</abbr>. Notably, the UK worked closely with <abbr title="Group of Seven">G7</abbr> partners in negotiating the Code of Conduct and Guiding Principles for the development of advanced <abbr title="artificial intelligence">AI</abbr> systems, as part of the Hiroshima <abbr title="artificial intelligence">AI</abbr> Process. The UK fully supports developing <abbr title="artificial intelligence">AI</abbr> policy and technical standards in a globally inclusive, multi-stakeholder, open, and consensus-based way. We support UK stakeholders to participate in <abbr title="Standards Development Organisations">SDOs</abbr> to both leverage the benefits of global technical standards here in the UK and deliver global digital technical standards shaped by democratic values.</p> <h3 id="final-thoughts"> <span class="number">6.7. </span> Final thoughts</h3> <p><strong>22. Do you have any other thoughts on our overall approach? Please include any missed opportunities, flaws, and gaps in our framework.</strong></p> <p>Summary of question 22:</p> <p>152. Some respondents felt that the <abbr title="artificial intelligence">AI</abbr> regulation framework set out in the white paper would benefit from more detailed guidance on <abbr title="artificial intelligence">AI</abbr>-related risks. Some wanted to see more stringent measures for severe risks, particularly related to the use of <abbr title="artificial intelligence">AI</abbr> in safety-critical contexts. Respondents suggested that the framework would be clearer if the government provided risk categories for certain uses of <abbr title="artificial intelligence">AI</abbr>, such as in law enforcement and the workplace. Other respondents stressed that <abbr title="artificial intelligence">AI</abbr> can pose or accelerate significant risks related to privacy and data protection breaches, cyberattacks, electoral interference, misinformation, human rights infringements, environmental sustainability, and competition issues. A few respondents were concerned about the potential existential risk posed by <abbr title="artificial intelligence">AI</abbr>. Many respondents felt that <abbr title="artificial intelligence">AI</abbr> technologies are developing faster than regulatory processes.</p> <p>153. Some respondents argued that the success of the framework relies on sufficient coordination between regulators in order to provide a clear and consistent approach to <abbr title="artificial intelligence">AI</abbr> across sectors and markets. Respondents also noted that different sectors face particular <abbr title="artificial intelligence">AI</abbr>-related benefits and risks, suggesting that the framework would need to balance the consistency provided by cross-sector requirements with the accuracy of sector-specific approaches. 
In particular, respondents flagged that any new rules or bodies to regulate <abbr title="artificial intelligence">AI</abbr> should build from the existing statutory remits of regulators and relevant regulatory standards. Respondents also noted that regulators would need to be adequately resourced with technical expertise and skills to implement the framework effectively.</p> <p>154. Respondents consistently emphasised the importance of international harmonisation to effective <abbr title="artificial intelligence">AI</abbr> regulation. Some respondents suggested that the UK should work towards an internationally aligned regulatory ecosystem for <abbr title="artificial intelligence">AI</abbr> by developing a gold-standard framework and promoting best practice through key multilateral channels such as the <abbr title="Organisation for Economic Co-operation and Development">OECD</abbr>, <abbr title="United Nations">UN</abbr>, <abbr title="Global Partnership on AI">GPAI</abbr>, <abbr title="Group of Seven">G7</abbr>, <abbr title="Group of 20">G20</abbr>, and the Council of Europe. Respondents noted that divergent or overlapping approaches to regulating <abbr title="artificial intelligence">AI</abbr> would cause significant compliance burdens. Respondents argued that international cooperation can support responsible <abbr title="artificial intelligence">AI</abbr> innovation in the UK by creating clear and certain rules that allow investments to move across multiple markets. Respondents also suggested establishing bilateral working groups with key strategic partners to share expertise. Some respondents stressed that the UK’s pro-innovation approach should be delivered at pace to remain competitive in a fast-moving international landscape.</p> <p>Response:</p> <p>155. We acknowledge that many respondents would like more detail on the implementation of the framework set out in the <abbr title="artificial intelligence">AI</abbr> regulation white paper, particularly regarding <abbr title="artificial intelligence">AI</abbr>-related risks. We have already started to deliver the proposals set out in the white paper, working quickly to establish centralised, cross-economy risk assessment activities within the government to identify, measure, and mitigate risks. Building from this work, we published research on frontier <abbr title="artificial intelligence">AI</abbr> capabilities and risks for discussion at the <abbr title="artificial intelligence">AI</abbr> Safety Summit<sup id="fnref:117" role="doc-noteref"><a href="#fn:117" class="govuk-link" rel="footnote">[footnote 117]</a></sup>. It outlined initial evidence on the most advanced <abbr title="artificial intelligence">AI</abbr> systems and how their capabilities and risks may continue to develop. The significant uncertainty in the evidence highlights the need for further research.</p> <p>156. This year, we will consult on a cross-economy risk register for <abbr title="artificial intelligence">AI</abbr>, seeking expert views on our risk assessment methodology and whether we have comprehensively captured <abbr title="artificial intelligence">AI</abbr>-related risks. The <abbr title="artificial intelligence">AI</abbr> Safety Institute will advance the world’s knowledge of <abbr title="artificial intelligence">AI</abbr> safety by carefully examining, evaluating, and testing advanced <abbr title="artificial intelligence">AI</abbr> systems. 
It will conduct fundamental research on how to keep people safe in the face of fast and unpredictable technological progress.</p> <p>157. In the white paper, we proposed an adaptable, principles-based approach to regulating <abbr title="artificial intelligence">AI</abbr> in order to keep pace with rapid technological change. We will use our risk assessment and monitoring and evaluation activities to continue to assess measures for the targeted, proportionate, and effective prevention and mitigation of any new and accelerated risks related to <abbr title="artificial intelligence">AI</abbr>, including those potentially posed by the development of the most powerful systems.</p> <p>158. We agree that an effective framework for regulating <abbr title="artificial intelligence">AI</abbr> will need to carefully balance cross-sector consistency with sector-specific needs in order to support responsible innovation. Our context-focused framework builds from the domain expertise of the UK’s regulators, ensuring that different industries benefit from existing regulatory knowledge. While this approach streamlines compliance within specific sectors, we recognise the need for consistency and coordination between regulators to create an easily navigable regulatory landscape for businesses and consumers. That is why, as we note in detail in our responses to questions on regulator capability and <abbr title="artificial intelligence">AI</abbr> sandboxes and testbeds (sections 6.5 and 6.10), we have been focusing on building from the existing strengths of UK regulators by establishing a pilot advisory service for <abbr title="artificial intelligence">AI</abbr> innovators through the <abbr title="Digital Regulation Cooperation Forum">DRCF</abbr>, sharing guidance on implementation, and building common regulator capability.</p> <p>159. Alongside our work to quickly deliver on the centralised risk assessment and regulatory capability and coordination activities, the UK has led the way in convening world leaders at the first ever <abbr title="artificial intelligence">AI</abbr> Safety Summit in order to establish an aligned approach to the most pressing risks related to the cutting edge of <abbr title="artificial intelligence">AI</abbr> technology. Countries agreed to the Bletchley Declaration at the <abbr title="artificial intelligence">AI</abbr> Safety Summit, recognising the need for international collaboration in understanding the risks and opportunities of frontier <abbr title="artificial intelligence">AI</abbr><sup id="fnref:118" role="doc-noteref"><a href="#fn:118" class="govuk-link" rel="footnote">[footnote 118]</a></sup>. We will deliver a groundbreaking International Report on the Science of <abbr title="artificial intelligence">AI</abbr> Safety to promote an evidence-based understanding of advanced <abbr title="artificial intelligence">AI</abbr><sup id="fnref:119" role="doc-noteref"><a href="#fn:119" class="govuk-link" rel="footnote">[footnote 119]</a></sup>. Additionally, the UK, through the <abbr title="artificial intelligence">AI</abbr> Safety Institute, will collaborate with other nations, including the US, to enhance our capability to research and evaluate <abbr title="artificial intelligence">AI</abbr> risks, underscoring our ability to drive change through international coordination on this critical topic.</p> <p>160. 
Our work at the <abbr title="artificial intelligence">AI</abbr> Safety Summit is complemented by multilateral engagement in other <abbr title="artificial intelligence">AI</abbr>-focused forums, such as the <abbr title="Group of Seven">G7</abbr> Hiroshima process, <abbr title="Group of 20">G20</abbr>, <abbr title="United Nations">UN</abbr>, <abbr title="Global Partnership on AI">GPAI</abbr>, and Council of Europe. In multilateral engagements, we are working to leverage each forum’s strengths, expertise, and membership to prevent overlap or divergence with other regulatory systems, ensuring that they add maximum value to global <abbr title="artificial intelligence">AI</abbr> governance discussions and support the UK’s values and economic priorities. The UK is also pursuing bilateral cooperation with many partners, reflecting our commitment to interoperability and establishing international norms for responsible <abbr title="artificial intelligence">AI</abbr> innovation.</p> <h3 id="legal-responsibility-for-ai"> <span class="number">6.8. </span> Legal responsibility for <abbr title="artificial intelligence">AI</abbr> </h3> <p><strong>L1. What challenges might arise when regulators apply the principles across different <abbr title="artificial intelligence">AI</abbr> applications and systems? How could we address these challenges through our proposed <abbr title="artificial intelligence">AI</abbr> regulatory framework?</strong></p> <p><strong>L2.i. Do you agree that the implementation of our principles through existing legal frameworks will fairly and effectively allocate legal responsibility for <abbr title="artificial intelligence">AI</abbr> across the life cycle?</strong></p> <p><strong>L2.ii. How could it be improved, if at all?</strong></p> <p><strong>L3. If you work for a business that develops, uses, or sells <abbr title="artificial intelligence">AI</abbr>, how do you currently manage <abbr title="artificial intelligence">AI</abbr> risk including through the wider supply chain? How could government support effective <abbr title="artificial intelligence">AI</abbr>-related risk management?</strong></p> <p>Summary of questions L1-L3:</p> <p>161. While respondents praised the benefits of a principles-based approach, nearly half were concerned about potential coordination issues between regulators and a lack of consistency across sectors. Some were concerned about confusing interdependencies between the <abbr title="artificial intelligence">AI</abbr> regulation framework and existing legislation. Respondents asked for sector-based guidance from regulators, compliance tools, and regulator engagement with industry. Some respondents also pointed to the importance of international alignment and collaboration.</p> <p>162. A majority of respondents disagreed that the implementation of the principles through existing legal frameworks would fairly and effectively allocate legal responsibility for <abbr title="artificial intelligence">AI</abbr> across the life cycle. Just under a third of respondents felt that the government should clarify <abbr title="artificial intelligence">AI</abbr>-related liability. However, there was not clear agreement about where liability should sit, with respondents noting a range of potential responsibilities for different actors across the <abbr title="artificial intelligence">AI</abbr> life cycle. There was repeated acknowledgement of the complexity of <abbr title="artificial intelligence">AI</abbr> value chains and the potential variations in use-cases. 
Some voiced concerns about gaps in existing legislation, including in intellectual property, legal services, and employment law.</p> <p>163. Around a quarter of respondents to L2.ii stated that new legislation and regulatory powers would be necessary to effectively allocate liability across the life cycle. Respondents stressed the importance of a legally responsible person for <abbr title="artificial intelligence">AI</abbr> within organisations, with a few suggesting an <abbr title="artificial intelligence">AI</abbr> equivalent to Data Protection Officers. Some respondents wanted more detail on how the principles will be implemented through existing law, with a few recommending regulatory guidance to clarify the landscape. A small number of respondents noted that the proposed central functions, including risk assessment, horizon scanning, and monitoring and evaluation, would help assess and adapt the framework to ensure that legal responsibility for new <abbr title="artificial intelligence">AI</abbr>-related risks is adequately distributed. A couple of respondents also suggested pre-deployment measures such as licensing and pre-market approvals.</p> <p>164. Nearly half of organisations that responded to L3 told us that they used risk assessment processes for <abbr title="artificial intelligence">AI</abbr>, with many building from sectoral best practice or trade body guidance. Respondents pointed to existing legal frameworks that capture <abbr title="artificial intelligence">AI</abbr>-related risks, such as product safety and data protection laws, and stressed that any future <abbr title="artificial intelligence">AI</abbr> measures should avoid duplicating or contradicting existing rules. Respondents suggested that it would be useful for businesses to understand the government’s view on <abbr title="artificial intelligence">AI</abbr>-related best practices, with some recommending a central guide on using <abbr title="artificial intelligence">AI</abbr> safely. Some smaller businesses asked for targeted support to implement the <abbr title="artificial intelligence">AI</abbr> principles.</p> <p>165. Respondents consistently stressed the importance of transparency as a tool for education, awareness, consent, and contestability. Echoing answers to questions Q2 and F1, many respondents mentioned that organisations should be transparent about <abbr title="artificial intelligence">AI</abbr> use, outputs, and training data.</p> <p>Response:</p> <p>166. We are pleased to note respondents’ broad support for a principles-based approach to <abbr title="artificial intelligence">AI</abbr> regulation that can provide proportionate oversight across the many potential applications and uses of <abbr title="artificial intelligence">AI</abbr> technologies. We agree with respondents that, as we implement the framework set out in the white paper, it is important to coordinate between regulators, sectors, existing legal frameworks, and the fast-moving international regulatory landscape. That is why we have been working at pace to establish the activities of the central function outlined in the white paper (for a detailed overview see section 5.1.2).</p> <p>167. We note that there are still questions regarding how to fairly and effectively allocate legal responsibility for <abbr title="artificial intelligence">AI</abbr> across the life cycle. 
We also recognise that many responses endorsed further government intervention to ensure the fair and effective allocation of liability across the <abbr title="artificial intelligence">AI</abbr> value chain. Responses stressed the complexity and variability of <abbr title="artificial intelligence">AI</abbr> supply chains, with use-cases highlighting expansive ethical and technical questions. We agree that there is no easy answer to the allocation of legal responsibility for <abbr title="artificial intelligence">AI</abbr> and we also agree that it is important to get liability and accountability for <abbr title="artificial intelligence">AI</abbr> right in order to support innovation and public trust. Building on the commitment to examine foundation models in the white paper, we have focused our initial life cycle accountability work on highly capable general-purpose systems (for details see section 5.2).</p> <p>168. We are also continuing to analyse how existing legal frameworks allocate accountability and legal responsibility for <abbr title="artificial intelligence">AI</abbr> across the life cycle. Our initial analysis suggests that a context-based approach to regulating <abbr title="artificial intelligence">AI</abbr> may not adequately address risks arising from highly capable general-purpose systems since it does not effectively and fairly allocate accountability to developers of those systems. We are exploring a range of potential obligations targeted at the developers of these systems, including those suggested by respondents, such as pre-market permits, model licensing, accountability and governance frameworks, transparency measures, and changes to existing legal frameworks. As we continue to iterate the <abbr title="artificial intelligence">AI</abbr> regulation framework, we will consider introducing measures to effectively allocate accountability and fairly distribute legal responsibility to those in the life cycle best able to mitigate <abbr title="artificial intelligence">AI</abbr>-related risks.</p> <p>169. We are encouraged by the wide range of risk assessment and management processes that respondents told us they are already using. Our “Emerging processes for frontier <abbr title="artificial intelligence">AI</abbr> safety” paper outlines a set of practices to inform the development of organisational <abbr title="artificial intelligence">AI</abbr> safety policies<sup id="fnref:120" role="doc-noteref"><a href="#fn:120" class="govuk-link" rel="footnote">[footnote 120]</a></sup>. It provides a snapshot of promising ideas and associated practices in <abbr title="artificial intelligence">AI</abbr> safety today. As discussed in response to questions on the cross-sectoral principles (section 6.1), we acknowledge the broad support for measures on transparency and we will continue our work assessing whether and which measures provide the most meaningful transparency for <abbr title="artificial intelligence">AI</abbr> end users and actors across the <abbr title="artificial intelligence">AI</abbr> life cycle.</p> <h3 id="foundation-models-and-the-regulatory-framework"> <span class="number">6.9. </span> Foundation models and the regulatory framework</h3> <p><strong>F1. What specific challenges will foundation models such as large language models (<abbr title="large language models">LLMs</abbr>) or open-source models pose for regulators trying to determine legal responsibility for <abbr title="artificial intelligence">AI</abbr> outcomes?</strong></p> <p><strong>F2. 
Do you agree that measuring compute provides a potential tool that could be considered as part of the governance of foundation models?</strong></p> <p><strong>F3. Are there other approaches to governing foundation models that would be more effective?</strong></p> <p>Summary of questions F1-F3:</p> <p>170. While respondents supported the <abbr title="artificial intelligence">AI</abbr> regulation framework set out in the white paper, many were concerned that foundation models may warrant a bespoke regulatory approach. Some respondents noted that foundation models are characterised by their technical complexity and stressed their potential to underpin many different applications across multiple sectors. Nearly a quarter of respondents emphasised that foundation models make it difficult to determine legal responsibility for <abbr title="artificial intelligence">AI</abbr> outcomes and shared hypothetical use-cases where both upstream and downstream actors are at fault. Respondents stressed that technical opacity, complex supply chains, and information asymmetries prevent sufficient explainability, accountability, and risk assessment for foundation models.</p> <p>171. Around a fifth of respondents expressed concerns about how foundation models use data, including whether data is of adequate quality, appropriate for downstream applications, compliant with existing law, and sourced ethically. Some stated that it is not clear who is responsible for deciding whether or not data is appropriate to a given application. Respondents stressed that training data currently lacks a clear definition, technical standards, and benchmark measurements.</p> <p>172. Some respondents noted concerns regarding wider access to <abbr title="artificial intelligence">AI</abbr>, including the open-sourcing, leaking, or malicious use of models. However, a similar number of respondents noted the importance of open source to <abbr title="artificial intelligence">AI</abbr> innovation, transparency, and trust.</p> <p>173. Half of respondents felt compute was an inadequate proxy for governance requirements, with some recommending assessing models by their capabilities and applications instead. Respondents felt that model verification measures, such as audits and evaluations, would be effective, with some suggesting these should be mandatory requirements. A few noted the importance of downstream monitoring or post-market surveillance.</p> <p>174. About a third of respondents supported governance measures including tools for trustworthy <abbr title="artificial intelligence">AI</abbr> such as technical standards and assurance. One respondent suggested a pre-deployment sandbox. A few supported moratoriums, bans, or limits. A small number of respondents suggested that contracts, licences, user agreements, and (cyber) security measures could be used to govern foundation models.</p> <p>Response:</p> <p>175. We acknowledge the range of challenges that respondents have raised in regard to foundation models and note the particular attention given to the core characteristics or features of foundation models such as technical opacity and complexity. We also recognise that challenges arise from the fact that foundation models can be broad in their potential applications and, as such, can cut across sectors and impact upon a range of risks. 
Our analysis shows that many regulators can struggle to enforce existing rules and laws on the developers of highly capable general-purpose <abbr title="artificial intelligence">AI</abbr> systems within their current statutory remits in a way that effectively mitigates risk.</p> <p>176. In response to repeated calls for specific regulatory interventions targeted at foundation models, we have been exploring the impact of foundation models on life cycle accountability for <abbr title="artificial intelligence">AI</abbr>. In the <abbr title="artificial intelligence">AI</abbr> regulation white paper, we stated that legal responsibility for <abbr title="artificial intelligence">AI</abbr> should sit with the actor best able to mitigate any potential risks it poses. Our assessment suggests that, despite their ability to mitigate risks when designing and developing <abbr title="artificial intelligence">AI</abbr>, the organisations building highly capable general-purpose systems are currently unlikely to be impacted by existing rules and laws in a way that sufficiently mitigates risk. That is why we are exploring options for targeted, proportionate interventions focusing on these systems and the risks that they present. We have been assessing measures to mitigate risk during the design, training, and development of highly capable general-purpose systems. We have also been exploring options for ensuring effective accountability, including legally mandated obligations, while avoiding cumbersome red tape.</p> <p>177. We note respondent views that compute is an imperfect proxy for foundation model capability. As part of our work exploring the right guardrails for highly capable general-purpose systems, we are examining how best to scope any regulatory requirements based on model capabilities, and the risks associated with these, wherever possible. But we recognise that, in some cases, controls might need to be in place before a model’s capability is known. In these cases, limited and careful use of proxies may be necessary to target regulatory requirements to only those systems that pose the most significant potential risks. Our early analysis indicates that initial thresholds could be based on forecasts of capabilities using a combination of two proxies: compute and capability benchmarking. However, there might need to be a range of thresholds. For more detail, see section 5.2.</p> <p>178. To provide greater clarity on best practices for responsible <abbr title="artificial intelligence">AI</abbr> innovation – including using data – we published a set of emerging safety processes for frontier <abbr title="artificial intelligence">AI</abbr> companies for the <abbr title="artificial intelligence">AI</abbr> Safety Summit in 2023<sup id="fnref:121" role="doc-noteref"><a href="#fn:121" class="govuk-link" rel="footnote">[footnote 121]</a></sup>. The document consolidates emerging thinking in <abbr title="artificial intelligence">AI</abbr> safety and has been written for <abbr title="artificial intelligence">AI</abbr> organisations and those who want to better understand their safety policies. We will update this guide by the end of the year and continue to encourage <abbr title="artificial intelligence">AI</abbr> companies to develop best practices (see section 5.2.2 for detail).</p> <p>179. We acknowledge respondents’ views on both the value and risks of open source <abbr title="artificial intelligence">AI</abbr>. 
Open access can provide wide benefits, including helping to mitigate some of the risks caused by highly capable general-purpose systems. However, open release can also exacerbate the risk of misuse. We believe that all powerful and potentially dangerous systems should be thoroughly risk-assessed before being released. We will continue to monitor and assess the impacts of open model access on risk. We will also carefully consider the impact of any potential measures to regulate open source systems on competition, innovation, and wider risk mitigation.</p> <p>180. As set out in section 5.2, we will continue our technical policy analysis to refine our thinking on highly capable general-purpose systems in the context of <abbr title="artificial intelligence">AI</abbr> regulation and life cycle accountability. We will continue to engage with external experts on a range of challenging topics such as how effective voluntary measures could be at mitigating risks and the right scope of any additional regulatory interventions, including proxies and capability thresholds. We will also continue to examine questions related to accountability and liability, including the extent to which existing laws and regulators can “reach” through the value chain to target the developers of highly capable general-purpose systems and the potential impact of open release. We will also engage with regulators to learn from their existing work on this topic. For example, we will continue to engage with the <abbr title="Competition and Markets Authority">CMA</abbr> on their work on foundation models.</p> <h3 id="ai-sandboxes-and-testbeds"> <span class="number">6.10. </span> <abbr title="artificial intelligence">AI</abbr> sandboxes and testbeds</h3> <p><strong>S1. To what extent would the sandbox models described in section 3.3.4 support innovation?</strong></p> <p><strong>S2. What could government do to maximise the benefit of sandboxes to <abbr title="artificial intelligence">AI</abbr> innovators?</strong></p> <p><strong>S3. What could government do to facilitate participation in an <abbr title="artificial intelligence">AI</abbr> regulatory sandbox?</strong></p> <p><strong>S4. Which industry sectors or classes of product would most benefit from an <abbr title="artificial intelligence">AI</abbr> sandbox?</strong></p> <p>Summary of questions S1-S4:</p> <p>181. Overall, respondents were strongly supportive of a regulatory sandbox for <abbr title="artificial intelligence">AI</abbr>. The highest proportion of respondents agreed that the “multiple sector, multiple regulator” and “single sector, multiple regulator” sandbox models would be most likely to support innovation, stating that the cross-sectoral or cross-regulator basis would help develop effective guidance in response to live issues, harmonise rules, and coordinate implementation of the <abbr title="artificial intelligence">AI</abbr> regulation framework. While there was no majority consensus on a specific sector that would most benefit from a sandbox, the largest proportion of respondents to this question stated that healthcare and medical devices would benefit most from an <abbr title="artificial intelligence">AI</abbr> sandbox, followed by financial services and transport.</p> <p>182. Some respondents suggested collaborating with the wider <abbr title="artificial intelligence">AI</abbr> ecosystem to maximise the benefit of sandboxes to <abbr title="artificial intelligence">AI</abbr> innovators. 
Many recommended building on the existing strengths of the UK regulatory landscape, such as the <abbr title="Digital Regulation Cooperation Forum">DRCF</abbr>. Linked to this, a few respondents noted that an <abbr title="artificial intelligence">AI</abbr> regulatory sandbox presents an opportunity for the UK to demonstrate global leadership in <abbr title="artificial intelligence">AI</abbr> regulation and technical standards by sharing findings and best practice internationally.</p> <p>183. Some respondents recommended making information accessible to maximise the benefit of the sandbox to participants and the wider <abbr title="artificial intelligence">AI</abbr> ecosystem. Respondents wanted participation pathways, training, tools, and other resources to be technically and financially accessible. Many respondents noted that accessible guidance and tools would allow organisations to engage with the sandbox. In particular, respondents emphasised the benefits of accessible information for smaller businesses and start-ups who are new to the regulatory process. Respondents advocated for regular reporting on sandbox processes, evidence, findings, and outcomes to encourage “business-as-usual” best practices for <abbr title="artificial intelligence">AI</abbr> across the wider ecosystem.</p> <p>184. Respondents noted the importance of reducing the administrative burden on smaller businesses and start-ups to lower the barrier to entry for those with fewer organisational resources. Some noted that financial support would help ensure that smaller businesses and start-ups could participate in resource-intensive, research and development-focused <abbr title="artificial intelligence">AI</abbr> sandboxes. Respondents felt that sharing evidence, guidance, and tools would ensure the wider <abbr title="artificial intelligence">AI</abbr> ecosystem benefitted from the sandbox. Some suggested access to datasets or product accreditation schemes would incentivise participation in supervised test environment sandboxes.</p> <p>Response:</p> <p>185. The response to the consultation – which aligns with independent research commissioned through the Regulators’ Pioneer Fund – has helped to inform the government’s decision to fund a pilot multi-regulator advisory service offered by the <abbr title="Digital Regulation Cooperation Forum">DRCF</abbr>: the <abbr title="artificial intelligence">AI</abbr> and Digital Hub. In particular, it has helped to clarify that a new regulatory service is likely to add most value supporting <abbr title="artificial intelligence">AI</abbr> innovators from a range of sectors to navigate the multiple regulatory regimes that govern the use of cross-cutting <abbr title="artificial intelligence">AI</abbr> products and services, rather than through targeting one specific regulatory remit or regulated sector.</p> <p>186. The <abbr title="Digital Regulation Cooperation Forum">DRCF</abbr> <abbr title="artificial intelligence">AI</abbr> and Digital Hub brings together four of the most critical regulators of <abbr title="artificial intelligence">AI</abbr> and digital technologies: the <abbr title="Competition and Markets Authority">CMA</abbr>, <abbr title="Information Commissioner's Office">ICO</abbr>, <abbr title="Office of Communications">Ofcom</abbr>, and the Financial Conduct Authority (<abbr title="Financial Conduct Authority">FCA</abbr>). 
Together these regulators are responsible for overseeing some of the most significant regulatory regimes that govern <abbr title="artificial intelligence">AI</abbr> products, whether cross-economy (data protection, competition and consumer regulation) or sectoral (financial services, telecommunications and broadcasting).</p> <p>187. Respondents to the consultation also emphasised the importance of making information and resources relating to the sandbox accessible in order to maximise its benefits. Respondents noted the need to reduce the compliance burden for smaller businesses and start-ups in particular. Again, these considerations are central to the design and operation of the <abbr title="Digital Regulation Cooperation Forum">DRCF</abbr> <abbr title="artificial intelligence">AI</abbr> and Digital Hub. In addition to providing tailored support to participating innovators that will be accessed via a simple online application process, the Hub will also publish anonymised case studies and guidance to support a broader pool of innovators facing similar compliance challenges. Our research has indicated that a repository of use cases such as this will be a particularly effective means of amplifying the reach and impact of such a pilot.</p> <p>188. We note that some respondents suggested that additional incentives such as product accreditation or access to data would encourage participation in a sandbox for <abbr title="artificial intelligence">AI</abbr>. These additional incentives would best suit a supervised test environment sandbox model. As the <abbr title="Digital Regulation Cooperation Forum">DRCF</abbr>’s <abbr title="artificial intelligence">AI</abbr> and Digital Hub pilot phase will focus on providing compliance support, these additional incentives will not be included. However, we are committed to reviewing how the service needs to develop – and what further measures are necessary to support <abbr title="artificial intelligence">AI</abbr> and digital innovators – in the light of the pilot findings and further feedback from stakeholders.</p> <h2 id="annex-a-method-and-engagement">Annex A: method and engagement</h2> <h3 id="consultation-method-and-engagement-summary">Consultation method and engagement summary</h3> <p>1. With the publication of the <abbr title="artificial intelligence">AI</abbr> regulation white paper on 29 March 2023, we held a formal 12-week public consultation that closed on 21 June 2023. In total, we heard from over 545 different individuals and organisations.</p> <p>2. Stakeholders were invited to submit evidence in response to 33 questions on the government’s policy proposals for a regulatory framework for <abbr title="artificial intelligence">AI</abbr>. Evidence could be submitted through an online survey, email, or post. In total, we received 409 responses in writing. Removing 50 duplicates and blanks left 359 written submissions. See <strong>Written submissions</strong> below for more detail.</p> <p>3. We also proactively engaged with 364 individuals through roundtables, technical workshops, bilaterals, and a programme of ongoing regulator engagement. Our roundtables sought the views of stakeholders that we might hear from less often, with topics including the impact of <abbr title="artificial intelligence">AI</abbr> on marginalised communities, public trust, and citizen perspectives. We also held roundtables focused on smaller businesses and the open source community. 
More detail can be found in the <strong>Engagement method</strong> and <strong>Engagement findings</strong> sections below.</p> <h3 id="method-for-analysing-written-submissions">Method for analysing written submissions</h3> <p>4. We received written consultation responses from organisations and individuals through an online survey and email. Of the total 409 responses, we received 232 through our online survey and 177 by email.</p> <p>5. Of the 33 questions, 12 were closed questions with predefined response options on the online survey. We manually coded emailed submissions that explicitly responded to these closed questions so that they followed the Likert-scale structure. The remaining 21 questions invited free-text qualitative responses and each response was individually analysed and manually coded. As such, quantitative analysis represents all stakeholders who answered a specific question through email or the online survey. Not all respondents answered every question, and we present our findings as approximate proportions of responses to each question.</p> <p>6. In accordance with our privacy notice<sup id="fnref:122" role="doc-noteref"><a href="#fn:122" class="govuk-link" rel="footnote">[footnote 122]</a></sup> and online survey privacy agreement, only those individuals and organisations who submitted evidence through our online survey and consented to our privacy agreement will have their names published in the list of respondents (see Annex B).</p> <p>7. Respondents to the online survey self-selected an organisation type and sector. We manually assigned organisation types and sectors to respondents who submitted written evidence through email. After removing blanks and duplications, we received responses from across 8 organisation types and 18 sectors. Chart M1 shows response numbers by organisation type. The majority of responses came from industry, business, trade unions, and trade associations. This was followed by individuals not representing an organisation and then research groups, universities, and think tanks.</p> <p>8. <strong>Chart M1: <abbr title="artificial intelligence">AI</abbr> regulation white paper consultation respondents by organisation type</strong></p> <table class="js-barchart-table mc-stacked mc-auto-outdent"> <thead> <tr> <th scope="col">Organisation type</th> <th scope="col">Total</th> <td></td> </tr> </thead> <tbody> <tr> <td>Industry, business, trade union, association</td> <td>132</td> <td> </td> </tr> <tr> <td>Individuals</td> <td>63</td> <td> </td> </tr> <tr> <td>Research organisation, university, think tank</td> <td>39</td> <td> </td> </tr> <tr> <td>Small or medium-sized enterprises (<abbr title="small and medium-sized enterprises">SMEs</abbr>)</td> <td>37</td> <td> </td> </tr> <tr> <td>Charity, non-profit, social, civic, activist</td> <td>28</td> <td> </td> </tr> <tr> <td>Other</td> <td>24</td> <td> </td> </tr> <tr> <td>Regulators</td> <td>23</td> <td> </td> </tr> <tr> <td>Legal services or professional advisory body</td> <td>13</td> <td> </td> </tr> </tbody> </table> <p>9.
<strong>Chart M2: <abbr title="artificial intelligence">AI</abbr> regulation white paper consultation respondents by sector</strong></p> <table class="js-barchart-table mc-stacked mc-auto-outdent"> <thead> <tr> <th scope="col">Sector</th> <th scope="col">Total</th> <td></td> </tr> </thead> <tbody> <tr> <td> <abbr title="artificial intelligence">AI</abbr>, digital, and technology</td> <td>75</td> <td> </td> </tr> <tr> <td>Other</td> <td>69</td> <td> </td> </tr> <tr> <td>Arts and entertainment</td> <td>31</td> <td> </td> </tr> <tr> <td>Financial services and insurance</td> <td>22</td> <td> </td> </tr> <tr> <td>Education</td> <td>22</td> <td> </td> </tr> <tr> <td>Research and development</td> <td>22</td> <td> </td> </tr> <tr> <td>Healthcare</td> <td>20</td> <td> </td> </tr> <tr> <td>Legal services</td> <td>19</td> <td> </td> </tr> <tr> <td>IT</td> <td>18</td> <td> </td> </tr> <tr> <td>Undisclosed</td> <td>14</td> <td> </td> </tr> <tr> <td>Public sector</td> <td>14</td> <td> </td> </tr> <tr> <td>Regulation</td> <td>13</td> <td> </td> </tr> <tr> <td>Communications</td> <td>8</td> <td> </td> </tr> <tr> <td>Secondary sectors</td> <td>5</td> <td> </td> </tr> <tr> <td>Transportation</td> <td>4</td> <td> </td> </tr> <tr> <td>Real estate</td> <td>2</td> <td> </td> </tr> <tr> <td>Primary sectors</td> <td>1</td> <td> </td> </tr> </tbody> </table> <p>M2 Note: Primary sectors include extraction of raw materials, farming, and fishing. Secondary sectors include utilities, construction, and manufacturing.</p> <p>10. The sector breakdown in Chart M2 shows that the largest number of responses came from the <abbr title="artificial intelligence">AI</abbr>, digital, and technology industry. This was followed by respondents who selected “other” and then those in the arts and entertainment sector. Further analysis of “other” responses suggests that these were often from individuals not representing an organisation, including students.</p> <p>11. As these demographics indicate, this sample, as with all written consultation samples, may not be representative of public opinion as some groups are over- or under-represented.</p> <p>12. In particular, we note that responses received from a number of creative industries stakeholders were either identical or very similar. These responses largely focused on <abbr title="artificial intelligence">AI</abbr> and copyright. These responses were analysed and included in the same way as all other responses.</p> <p>13. 89 emailed pieces of evidence followed the question structure of our online survey. These were analysed alongside responses from the survey to inform quantitative analysis. After removing duplicate responses, we included 66 emailed responses in our analysis.</p> <p>14. 88 emailed responses provided evidence beyond the scope of our consultation questions or without explicit reference to the questions. We analysed these submissions individually. While our findings from this analysis inform our overall response, we do not include these responses within our quantitative analysis as they do not explicitly answer our consultation questions. Where relevant, we have used insights from these responses to inform our qualitative question summaries. After removing duplicate responses, we included 84 of these in our qualitative analysis.</p> <p>15. We received 33 duplicate responses that were sent twice through either the online survey or email.
We received requests for 4 of these duplications to be deleted on the grounds that they were incorrect and superseded by a later response. These duplicates were removed from analysis entirely. The remaining 29 duplicates were responses sent by both online survey and email. Where appropriate, we removed either the email or survey response from our quantitative analysis to avoid skewing counts with duplicate submissions. However, where one version contained additional detail, we analysed both responses and wove any additional insights into our overall qualitative analysis. A further 17 written responses were discounted from analysis entirely on the grounds that they were blank or contained spam. After reviewing and cross-checking responses, we discounted 50 written submissions from the final analysis to avoid overcounting blanks, spam, and duplicate responses. That left 359 submissions, of which 209 were received through the online survey and 150 by email.</p> <p>16. We use illustrative qualitative language such as “many”, “some”, and “a few” to summarise the written responses we received to our consultation. These descriptions are intended to provide an indication of the extent to which a particular theme or sentiment was raised by respondents. Not all respondents answered every question. We refer to approximate proportions of respondents to each question, including “a half”, “a quarter”, or “a third”. We use the terms “nearly all” or “most” when a substantial majority of respondents made a particular argument or shared a sentiment. We use the terms “a majority” or “over half” to show when a point was shared by over 50% of respondents. We use “many” when lots of respondents raised a similar point but the theme or sentiment was not shared by over half of respondents. We use “some” to indicate when a theme or sentiment was shared by between a tenth and a fifth of respondents. We use “a few” when a smaller number of respondents made a similar point. We use “a small number” when fewer than 10 respondents raised a point, specifying if this was “one” or “two” (“a couple”).</p> <h3 id="engagement-method">Engagement method</h3> <p>17. We held 19 roundtables engaging 278 individuals representing a range of perspectives and organisation types including <abbr title="artificial intelligence">AI</abbr> industry, digital, and technology organisations, small businesses and start-ups, companies that use <abbr title="artificial intelligence">AI</abbr>, the open source community, trade bodies and unions, legal services, financial services, creative industries, academics, think tanks, research organisations, regulators, government departments, the public sector, charities and advocacy groups, citizens, marginalised communities, and wider civil society.</p> <p>18. Some roundtables focused on hearing from regulators or stakeholders within a specific sector, including education, transport, financial services, legal services, and health and social care. Others focused on technical elements of the regulatory framework such as methods for <abbr title="artificial intelligence">AI</abbr> verification, liability, and tools for trustworthy <abbr title="artificial intelligence">AI</abbr>, including technical standards.
Some discussions were designed to understand the views of stakeholders we might hear from less often: one explored the impact of <abbr title="artificial intelligence">AI</abbr> on marginalised communities, another examined the role of public trust, two further roundtables focused on the perspectives of small businesses and the open source community, and the Minister for <abbr title="artificial intelligence">AI</abbr> and Intellectual Property, Viscount Camrose, chaired a citizens’ roundtable during London Tech Week. Other topics included <abbr title="artificial intelligence">AI</abbr> safety, international interoperability, approaches to responsible <abbr title="artificial intelligence">AI</abbr> innovation in industry, and the <abbr title="UK Research and Innovation">UKRI</abbr>’s <abbr title="artificial intelligence">AI</abbr> Technology Mission.</p> <p>19. We are grateful to the partners who worked with us to organise roundtables and workshops including <abbr title="Centre for Data Ethics and Innovation">CDEI</abbr>, the Department for Education (<abbr title="Department for Education">DfE</abbr>), the Department of Health and Social Care (<abbr title="Department of Health and Social Care">DHSC</abbr>), the Department for Transport (DfT), the Ministry of Justice (MOJ), UK Research and Innovation (<abbr title="UK Research and Innovation">UKRI</abbr>), the British Computer Society (<abbr title="British Computer Society">BCS</abbr>), Hogan Lovells, Innovate Finance, the Ada Lovelace Institute, the Alan Turing Institute, Open UK, the British Standards Institution (<abbr title="British Standards Institution">BSI</abbr>), and the University of Bath <abbr title="Accountable, Responsible and Transparent Artificial Intelligence">ART-AI</abbr>.</p> <p>20. Alongside this programme of roundtable discussions and technical workshops, we engaged with 42 stakeholders through external engagements where we presented the <abbr title="artificial intelligence">AI</abbr> regulation framework outlined in the white paper. We also held 28 bilaterals and met with 16 regulators as part of our ongoing work to support implementation. We include insights from this engagement throughout the consultation response.</p> <h3 id="engagement-findings">Engagement findings</h3> <p>21. In this section, we provide a brief overview of our roundtables and workshops, summarising insights into four areas based on roundtable focus and participation from:</p> <ul> <li>regulators</li> <li>industry</li> <li>civil society</li> <li>research organisations</li> </ul> <h4 id="regulators">Regulators</h4> <p>22. We held six roundtables with regulators to understand existing capabilities and needs, including how the approach set out in the <abbr title="artificial intelligence">AI</abbr> regulation white paper would be implemented in specific sectors such as health and social care, justice, education, and transport.</p> <p>23. Regulators reported varying levels of in-house <abbr title="artificial intelligence">AI</abbr> knowledge and capability, with most supporting central measures to enhance technical expertise. Some agreed that a pool of expertise could enhance regulatory capacity, while others suggested that the proposed central function could provide guidance and training materials for regulators.</p> <p>24. Regulators were broadly supportive of the central function outlined in the white paper, emphasising that it could serve as a useful point of contact for regulators.
However, regulators also stressed that the central function should not infringe on the independence or existing statutory remits of regulators, suggesting that any guidance on the implementation of the principles should not impede, duplicate, or contradict their current mandates and work.</p> <p>25. Participants at the roundtables emphasised that regulators need adequate resources, endorsing government investment in technical capability and capacity. Some noted that the government may also need to introduce new regulatory powers in order for the framework to be effective, stating that achieving meaningful transparency and contestability may require the government to mandate disclosure from developers and deployers of <abbr title="artificial intelligence">AI</abbr> at set points.</p> <p>26. Participants raised several challenges to effective regulator oversight specific to <abbr title="artificial intelligence">AI</abbr>, including unknown and changing functional boundaries, technical obscurity, unpredictable environments, lack of human oversight or input, and highly iterative technological life cycles. Regulators suggested that collaboration between regulators, safety engineers, and <abbr title="artificial intelligence">AI</abbr> experts is key to creating robust verification measures that prevent, reduce, and mitigate risks.</p> <p>27. While regulators stated that the principles provide useful common ground across sectors, they noted that sector-specific analysis would be necessary to identify gaps in the framework. Some noted that sector-specific use cases would help regulators apply the principles in their respective domains.</p> <h4 id="industry">Industry</h4> <p>28. We heard from a range of industry stakeholders at seven roundtable events, with topics including international interoperability, responsible <abbr title="artificial intelligence">AI</abbr> in industry, general-purpose <abbr title="artificial intelligence">AI</abbr>, and governance and technical standards needs.</p> <p>29. Some participants were concerned that market imbalances were preventing innovation and competition across the <abbr title="artificial intelligence">AI</abbr> ecosystem. In particular, participants argued that more accessible, traceable, and accountable data would promote innovation, noting that smaller companies often have to rely on existing market leaders or lower-quality datasets due to the lack of affordable commercial, proprietary datasets. Participants suggested that clear standards for data and more equitable access to higher-quality datasets would stimulate <abbr title="artificial intelligence">AI</abbr> innovation across the wider ecosystem and prevent incumbent advantages.</p> <p>30. Participants also noted that some of the potential measures to regulate <abbr title="artificial intelligence">AI</abbr> could allow current market leaders to further entrench their advantages and increase existing market imbalances. Participants noted that smaller businesses and the open source community could face a significant compliance burden, with some suggesting that regulatory sandboxes should be used to test the impact of regulation.
While some suggested that legal responsibility for <abbr title="artificial intelligence">AI</abbr> should be allocated to earlier stages in the life cycle, others warned that placing the legal responsibility for downstream applications on open source developers would severely limit innovation as they would not be able to account for the many potential uses of open source code.</p> <p>31. There was no consensus on whether licensing requirements for foundation models would effectively encourage responsible <abbr title="artificial intelligence">AI</abbr> innovation or, instead, concentrate market power among a few established companies. A few participants noted that practical guidance on implementation and use cases would support organisations to apply the principles. Some participants noted that a licensing framework allowing open access to only some parts of an <abbr title="artificial intelligence">AI</abbr> system’s code could retain some of the benefits of the information sharing and transparency that define open source.</p> <p>32. Some participants stated that it is not clear which body is responsible for regulating <abbr title="artificial intelligence">AI</abbr>, advocating for a new, <abbr title="artificial intelligence">AI</abbr>-specific regulator or a clear lead regulator. Participants emphasised the importance of technical expertise to effective regulation.</p> <p>33. Participants also noted the important role of international interoperability, insurance, technical standards, and transparency in market success for <abbr title="artificial intelligence">AI</abbr>.</p> <h4 id="civil-society-and-public-trust">Civil society and public trust</h4> <p>34. Three roundtables were held with smaller businesses, civil society stakeholders, and special interest groups to discuss public trust and the impact of <abbr title="artificial intelligence">AI</abbr> on citizens and marginalised communities.</p> <p>35. Participants emphasised that fairness and inclusivity were key to realising the benefits of <abbr title="artificial intelligence">AI</abbr> for everyone. Participants noted the importance of diversity in regard to the data used to train and build <abbr title="artificial intelligence">AI</abbr>, as well as the teams who develop, deploy, and regulate <abbr title="artificial intelligence">AI</abbr>. Participants suggested co-creation and collaboration with marginalised communities would ensure that <abbr title="artificial intelligence">AI</abbr> could create benefits for everyone.</p> <p>36. Participants also stressed that organisations using <abbr title="artificial intelligence">AI</abbr> not only need to be transparent about when and how <abbr title="artificial intelligence">AI</abbr> is used but should also make explanations accessible to different groups. Participants noted that, while <abbr title="artificial intelligence">AI</abbr> can offer benefits to marginalised communities, these populations often face a disproportionate negative impact from <abbr title="artificial intelligence">AI</abbr>. Participants called for more education on the use of <abbr title="artificial intelligence">AI</abbr> on the grounds that there is currently a significant lack of consumer awareness, organisational knowledge, and accessible redress routes.</p> <p>37. Participants noted that regulators have a key role to play in improving access to routes to contest or seek redress for <abbr title="artificial intelligence">AI</abbr>-related harms.
Participants emphasised that regulators require adequate funding and resources in order to achieve this. Participants strongly supported a central ombudsman for <abbr title="artificial intelligence">AI</abbr> to improve the accessibility of high-quality legal advice on <abbr title="artificial intelligence">AI</abbr>. Many noted that legal advice on <abbr title="artificial intelligence">AI</abbr> is currently expensive, hard to access, and sometimes given by unregulated providers outside of the legal profession. Participants also noted that the ombudsman would likely receive a large number of small-scale complaints, which it should be adequately equipped to handle.</p> <p>38. Participants also emphasised the importance of specific safeguards for young people, including potential changes to existing statutory mechanisms such as those for data protection and equality.</p> <h4 id="academia-research-organisations-and-think-tanks">Academia, research organisations, and think tanks</h4> <p>39. We held three events to hear from academics, research organisations, and think tanks on <abbr title="artificial intelligence">AI</abbr> safety, legal responsibility for <abbr title="artificial intelligence">AI</abbr>, and the <abbr title="UK Research and Innovation">UKRI</abbr>’s <abbr title="artificial intelligence">AI</abbr> Technology Mission.</p> <p>40. Participants suggested differentiating the types of risk posed by <abbr title="artificial intelligence">AI</abbr>, noting that both immediate and long-term risks would need to be factored into any safety measures for <abbr title="artificial intelligence">AI</abbr>. Participants felt that sector-specific analysis should inform assessments of <abbr title="artificial intelligence">AI</abbr>-related risks. Participants noted that the technical obscurity of <abbr title="artificial intelligence">AI</abbr> can make it difficult for organisations and regulators to determine the cause of any harms that arise. Participants emphasised that, in order to prevent harms, pre-deployment measures are key to ensuring that <abbr title="artificial intelligence">AI</abbr> is safe for market release.</p> <p>41. Participants argued that high-quality regulation can help <abbr title="artificial intelligence">AI</abbr> move quickly and safely from development to market. Participants argued that there was a need for greater technical knowledge across government and regulators, along with better <abbr title="artificial intelligence">AI</abbr> skills across the wider ecosystem. Some called for the certification of <abbr title="artificial intelligence">AI</abbr> engineers and developers to enhance public confidence, while one participant promoted the certification of institutional leads responsible for decisions related to <abbr title="artificial intelligence">AI</abbr>. There was no consensus on whether a new, central regulator for <abbr title="artificial intelligence">AI</abbr> or existing regulators would implement the proposed framework most effectively. However, participants agreed that aligning regulatory guidance and sharing expertise across sectors would build compliance capability. Participants suggested a “mixed economy” of regulation, with statutory requirements to ensure rules worked effectively.</p> <p>42. Participants noted that <abbr title="artificial intelligence">AI</abbr> life cycles are varied and complex.
Participants wanted the government to define actors across the <abbr title="artificial intelligence">AI</abbr> life cycle and determine corresponding obligations to clarify the landscape. However, there was no agreement on the best way to do this, with participants suggesting that actors may be defined by their function (as in data protection regulation), market power or benefit (as in digital markets regulation), or proximity to and reasonable foreseeability of risks (as in product safety legislation). While some participants wanted to see more stringent responsibilities for foundation model developers, others warned that too narrow a focus could mean that other <abbr title="artificial intelligence">AI</abbr>-related opportunities might be missed.</p> <h2 id="annex-b-list-of-consultation-respondents">Annex B: List of consultation respondents</h2> <h3 id="list-of-consultation-respondents">List of consultation respondents</h3> <p>1. We are grateful to all the individuals and organisations who shared their insights with us over the course of the consultation period.</p> <p>2. Our <abbr title="artificial intelligence">AI</abbr> regulation framework is intended to be collaborative and we will continue to work closely with regulators, academia, civil society, and the public in order to monitor and evaluate the effectiveness of our approach.</p> <p>3. In accordance with our privacy notice<sup id="fnref:123" role="doc-noteref"><a href="#fn:123" class="govuk-link" rel="footnote">[footnote 123]</a></sup> and online survey privacy agreement, only those individuals and organisations who submitted evidence through our online survey and consented to our privacy agreement have their names listed below. The list represents the 209 online survey submissions that we analysed after cleaning the data for duplications, blanks, and spam (see Annex A for details). Names are listed as they were given, with personal names removed if an organisation name was available. We provide 207 names here as 2 responses included no name.</p> <p>4.
Further detail on the organisation type and sector of those we received written responses from by email and online survey can be found in the extended method for analysing written responses in Annex A.</p> <h3 id="respondents-to-the-online-consultation-survey">Respondents to the online consultation survey</h3> <ol> <li> <p>Adarga Limited</p> </li> <li> <p>ADS Group</p> </li> <li> <p>Advai Ltd</p> </li> <li> <p>AGENCY: Assuring Citizen Agency in a World with Complex Online Harms</p> </li> <li> <p>Agile Property &amp; Homes Limited</p> </li> <li> <p><abbr title="artificial intelligence">AI</abbr> &amp; Partners</p> </li> <li> <p><abbr title="artificial intelligence">AI</abbr> Centre for Value Based Healthcare</p> </li> <li> <p>Aidan Freeman</p> </li> <li> <p>AIethics.ai</p> </li> <li> <p>Alacriter</p> </li> <li> <p>Aligned <abbr title="artificial intelligence">AI</abbr></p> </li> <li> <p>Alliance for Intellectual Property</p> </li> <li> <p>Altered Ltd</p> </li> <li> <p>Amendolara Holdings Limited</p> </li> <li> <p>Anton</p> </li> <li> <p>Arran McCutcheon</p> </li> <li> <p><abbr title="Accountable, Responsible and Transparent Artificial Intelligence">ART-AI</abbr>, University of Bath</p> </li> <li> <p>Arts Council England</p> </li> <li> <p>Association for Computing Machinery Europe Technology Policy Committee</p> </li> <li> <p>Association of British HealthTech Industries</p> </li> <li> <p>Association of Chartered Certified Accountants (<abbr title="Association of Chartered Certified Accountants">ACCA</abbr>)</p> </li> <li> <p>Association of Financial Mutuals</p> </li> <li> <p>Association of Illustrators</p> </li> <li> <p>Association of Learned and Professional Society Publishers</p> </li> <li> <p>Assuring Autonomy International Programme, University of York</p> </li> <li> <p>Avi Semelr</p> </li> <li> <p>Baringa Partners LLP</p> </li> <li> <p>Barnacle Labs</p> </li> <li> <p>Barry O’Brien</p> </li> <li> <p>Ben Hopkinson</p> </li> <li> <p>BPI British Phonographic Industry</p> </li> <li> <p>Bristows LLP</p> </li> <li> <p>British Copyright Council</p> </li> <li> <p>British Pest Control Association</p> </li> <li> <p>British Security Industry Association</p> </li> <li> <p>Brunel University London Centre for Artificial Intelligence: Social &amp; Digital Innovations</p> </li> <li> <p><abbr title="British Standards Institution">BSI</abbr> Group The Netherlands B.V.</p> </li> <li> <p>BT Group</p> </li> <li> <p>Bud Financial</p> </li> <li> <p>Calvin Karpenko</p> </li> <li> <p>Carlo Attubato</p> </li> <li> <p>Center for <abbr title="artificial intelligence">AI</abbr> and Digital Policy Washington, DC. USA</p> </li> <li> <p>Centre for Policy Studies</p> </li> <li> <p>Charlie Bowler</p> </li> <li> <p>Chegg, Inc.</p> </li> <li> <p>Cisco</p> </li> <li> <p>City, University of London</p> </li> <li> <p>Cogstack</p> </li> <li> <p>Colin Hayhurst</p> </li> <li> <p>Congenica Ltd</p> </li> <li> <p>Craig Meulen</p> </li> <li> <p>Creators’ Rights Alliance</p> </li> <li> <p>CTRL-Shift &amp; Collider Health</p> </li> <li> <p>Cyferd</p> </li> <li> <p>CyLon Ventures</p> </li> <li> <p>DACS (Design and Artists Copyright Society)</p> </li> <li> <p>Daniel Marsden</p> </li> <li> <p>Darrell Warner Limited</p> </li> <li> <p>Deborah W.A. Foulkes</p> </li> <li> <p>Deloitte UK</p> </li> <li> <p>Developers Alliance</p> </li> <li> <p>Department for Education (<abbr title="Department for Education">DfE</abbr>)</p> </li> <li> <p>Direct Line Group</p> </li> <li> <p>DNV</p> </li> <li> <p>Dr. Michael K. 
Cohen</p> </li> <li> <p>EasyJet Airline Company Ltd.</p> </li> <li> <p>Ed Hagger</p> </li> <li> <p>EKC Group</p> </li> <li> <p>Elliott Andrews</p> </li> <li> <p>Emily Gray</p> </li> <li> <p>Emma Ahmed-Rengers</p> </li> <li> <p>Enzai Technologies Limited</p> </li> <li> <p>Equity</p> </li> <li> <p>Eviden</p> </li> <li> <p>Experian UK&amp;I</p> </li> <li> <p>Falcon Windsor</p> </li> <li> <p>FlyingBinary</p> </li> <li> <p>ForHumanity</p> </li> <li> <p>Freeths LLP</p> </li> <li> <p>Fujitsu</p> </li> <li> <p>Full Fact</p> </li> <li> <p>Geeks Ltd.</p> </li> <li> <p>Getty Images</p> </li> <li> <p>GlaxoSmithKline plc</p> </li> <li> <p>Glenn Donaldson</p> </li> <li> <p>Global Witness</p> </li> <li> <p>Greg Colbourn</p> </li> <li> <p>Greg Mathews</p> </li> <li> <p>Guy Warren</p> </li> <li> <p>Hazy</p> </li> <li> <p>Henry</p> </li> <li> <p>Hollie</p> </li> <li> <p>Hugging Face</p> </li> <li> <p>Iain Darby</p> </li> <li> <p>International Federation of the Phonographic Industry (<abbr title="International Federation of the Phonographic Industry">IFPI</abbr>)</p> </li> <li> <p>INRO London</p> </li> <li> <p>Institute for the Future of Work</p> </li> <li> <p>Institute of Chartered Accountants in England and Wales (<abbr title="Institute of Chartered Accountants in England and Wales">ICAEW</abbr>)</p> </li> <li> <p>Institute of Innovation and Knowledge Exchange (<abbr title="Institute of Innovation and Knowledge Exchange">IKE Institute</abbr>)</p> </li> <li> <p>Institute of Physics and Engineering in Medicine</p> </li> <li> <p>Institute of Physics and Engineering in Medicine (Clinical and Scientific Computing group)</p> </li> <li> <p>Institution of Occupational Safety and Health</p> </li> <li> <p>International Federation of Journalists</p> </li> <li> <p>Jake Bailey</p> </li> <li> <p>Jake Wilkinson</p> </li> <li> <p>Japan Electronics and Information Technology Industries Association</p> </li> <li> <p>Joe Collman</p> </li> <li> <p>Johnny Luk</p> </li> <li> <p>Johnson &amp; Johnson</p> </li> <li> <p>Jonas Herold-Zanker</p> </li> <li> <p>Joseph Johnston</p> </li> <li> <p>Judith Barker</p> </li> <li> <p>Kainos Software Ltd</p> </li> <li> <p>Kelechi Ejikeme</p> </li> <li> <p>Knowledge Associates Cambridge Ltd.</p> </li> <li> <p>Labour for the Long Term</p> </li> <li> <p>Legal &amp; General Group PLC</p> </li> <li> <p>Leverhulme Centre for the Future of Intelligence</p> </li> <li> <p>Lewis</p> </li> <li> <p><abbr title="London School of Economics and Political Science">LSE</abbr> Law, Technology and Society Group</p> </li> <li> <p>Lucy Purdon</p> </li> <li> <p>Luke Richards</p> </li> <li> <p>Lumi Network</p> </li> <li> <p>Market Research Society</p> </li> <li> <p>Marta</p> </li> <li> <p>Martin Gore</p> </li> <li> <p>Mastercard Europe</p> </li> <li> <p>MedTech Europe</p> </li> <li> <p>Megha Barot</p> </li> <li> <p>Michael Fisher</p> </li> <li> <p>Michael Pascu</p> </li> <li> <p>Microsoft</p> </li> <li> <p>Mind Foundry</p> </li> <li> <p>Mukesh Sharma</p> </li> <li> <p>National Physical Laboratory</p> </li> <li> <p>National Taxpayers Union Foundation (<abbr title="National Taxpayers Union Foundation">NTUF</abbr>)</p> </li> <li> <p>National Union of Journalists</p> </li> <li> <p>NATS</p> </li> <li> <p>Nebuli Ltd.</p> </li> <li> <p>Newcastle University</p> </li> <li> <p>Newsstand</p> </li> <li> <p>Nicole Hawkesford</p> </li> <li> <p>Office for Standards in Education, Children’s Services and Skills (<abbr title="Office for Standards in Education, Children’s Services and Skills">Ofsted</abbr>)</p> </li> <li> <p>Office 
for Statistics Regulation</p> </li> <li> <p>Orbit RRI</p> </li> <li> <p>Paul Dunn</p> </li> <li> <p>Paul Evans</p> </li> <li> <p>Paul Ratcliffe</p> </li> <li> <p>Pearson</p> </li> <li> <p>Phrasee</p> </li> <li> <p>Pippa Robertson</p> </li> <li> <p>Planar <abbr title="artificial intelligence">AI</abbr> Limited</p> </li> <li> <p>Policy Connect</p> </li> <li> <p>Professional Publishers Association</p> </li> <li> <p>Professor Julia Black</p> </li> <li> <p>PRS for Music</p> </li> <li> <p>Publishers Association</p> </li> <li> <p>Publishers’ Licensing Services</p> </li> <li> <p>Pupils 2 Parliament</p> </li> <li> <p>Queen Bee Marketing Hive</p> </li> <li> <p>Rebecca Palmer</p> </li> <li> <p>RELX</p> </li> <li> <p>Reset</p> </li> <li> <p>Rohan Vij</p> </li> <li> <p>Royal Photographic Society of Great Britain</p> </li> <li> <p>Salesforce</p> </li> <li> <p>SambaNova Systems inc</p> </li> <li> <p>Samuel Frewin</p> </li> <li> <p>SAP</p> </li> <li> <p>Scale <abbr title="artificial intelligence">AI</abbr></p> </li> <li> <p>ScaleUp Institute</p> </li> <li> <p>Scott Timcke</p> </li> <li> <p>Seldon</p> </li> <li> <p>Sharon Darcy</p> </li> <li> <p>Simon Kirby</p> </li> <li> <p>Skin Analytics Ltd</p> </li> <li> <p>South West Grid for Learning</p> </li> <li> <p>Stability <abbr title="artificial intelligence">AI</abbr></p> </li> <li> <p>Steve Kendall</p> </li> <li> <p><abbr title="Science and Technology Facilities Council">STFC</abbr> Hartree Centre</p> </li> <li> <p>Surrey Institute for People-Centred Artificial Intelligence</p> </li> <li> <p>Teal Legal Ltd</p> </li> <li> <p>Temple Garden Chambers</p> </li> <li> <p>The Copyright Licensing Agency Ltd</p> </li> <li> <p>The Data Lab Innovation Centre</p> </li> <li> <p>The Institute of Customer Service</p> </li> <li> <p>The Multi-Agency Advice Service (<abbr title="Multi-Agency Advice Service">MAAS</abbr>) <abbr title="artificial intelligence">AI</abbr> and Digital Regulations Service for health and social care.</p> </li> <li> <p>The Operational Research Society</p> </li> <li> <p>The Pharmacists’ Defence Association (<abbr title="Pharmacists’ Defence Association">PDA</abbr>)</p> </li> <li> <p>The Physiological Society</p> </li> <li> <p>The Publishers Association</p> </li> <li> <p>The Society of Authors</p> </li> <li> <p>The University of Winchester</p> </li> <li> <p>Tom Edward Ashworth</p> </li> <li> <p>TRANSEARCH International</p> </li> <li> <p>Trilateral Research</p> </li> <li> <p>University of Edinburgh</p> </li> <li> <p>University of Edinburgh</p> </li> <li> <p>University of Winchester</p> </li> <li> <p>Valentino Giudice</p> </li> <li> <p>ValidMind</p> </li> <li> <p>W Legal Ltd</p> </li> <li> <p>Wales Safer Communities Network (membership from Police, Fire, Local Authorities, Probation and Third Sector), hosted by WLGA</p> </li> <li> <p>Warwickshire County Council</p> </li> <li> <p>We and <abbr title="artificial intelligence">AI</abbr></p> </li> <li> <p>Workday</p> </li> <li> <p>Writers’ Guild of Great Britain</p> </li> </ol> <h2 id="annex-c-individual-question-summaries">Annex C: Individual question summaries</h2> <h3 id="the-revised-cross-sectoral-ai-principles-1">The revised cross-sectoral <abbr title="artificial intelligence">AI</abbr> principles</h3> <p><strong>1. Do you agree that requiring organisations to make it clear when they are using <abbr title="artificial intelligence">AI</abbr> would improve transparency?</strong></p> <p>1. 
A majority of respondents agreed that requiring organisations to make it clear when they are using <abbr title="artificial intelligence">AI</abbr> would improve transparency. Respondents who disagreed felt labelling <abbr title="artificial intelligence">AI</abbr> use would be either insufficient or disproportionately burdensome.</p> <p>2. Respondents who argued the measure would be insufficient often stated that regulators lack the relevant powers, funding, and capabilities to adequately ensure transparency. Linked to this, respondents noted issues around enforcement and access to appeal and redress. Some respondents recommended that the government should consider relevant statutory measures and accountability mechanisms. A few respondents suggested that explanations should be targeted to the context and audience.</p> <p>3. Other respondents were concerned that a blanket requirement for transparency would create a burdensome barrier for lower-risk <abbr title="artificial intelligence">AI</abbr> applications. One respondent noted that the proposal assumes that a single actor in the <abbr title="artificial intelligence">AI</abbr> value chain will have adequate visibility across potentially many life cycle stages and applications. A few respondents wanted to see clear thresholds (including “high-risk applications”) and guidance from the government and regulators on transparency requirements.</p> <p>4. Respondents were concerned that transparency measures may interact with existing and forthcoming legislation, such as that for data protection and intellectual property.</p> <p><strong>2. Are there other measures we could require of organisations to improve transparency for <abbr title="artificial intelligence">AI</abbr>?</strong></p> <p>5. There was strong support from respondents for a range of transparency measures. Respondents stressed that transparency was key to building public trust, accountability, and an effective and verifiable regulatory framework.</p> <p>6. Many respondents endorsed clear reporting obligations on the inputs used to build and train <abbr title="artificial intelligence">AI</abbr>. Respondents noted that transparency would be improved through the disclosure of a range of inputs, from data to compute. Echoing responses to question F1 on foundation models, respondents’ concerns coalesced around whether training data was of sufficient quality, compliant with existing legal frameworks including intellectual property and data protection, and appropriate for downstream uses. A few respondents argued that compute disclosure would improve transparency on the environmental impacts of <abbr title="artificial intelligence">AI</abbr>.</p> <p>7. Many respondents also supported the labelling of <abbr title="artificial intelligence">AI</abbr> use and outputs, with many recommending the measure to improve user awareness and organisational accountability. Some respondents suggested that labelling <abbr title="artificial intelligence">AI</abbr>-generated outputs would help combat <abbr title="artificial intelligence">AI</abbr>-generated misinformation and promote intellectual property rights. A few respondents wanted to see clearer opt-ins for uses of data and <abbr title="artificial intelligence">AI</abbr>, with options for human alternatives.</p> <p>8. Some respondents endorsed measures that would encourage explanations for <abbr title="artificial intelligence">AI</abbr> outcomes and potential impacts.
These included measures to show users how models produce outputs or answers, as well as to address model limitations and impacts. Similarly, a few respondents noted the importance of organisational and public education through accessible information and targeted awareness raising. A couple of respondents suggested that public or organisational registers for (high-risk) <abbr title="artificial intelligence">AI</abbr> would help improve awareness.</p> <p>9. While some respondents advocated for reporting on model details, many emphasised that complex technical information would be best disclosed to regulators and independent verifiers rather than the public. Respondents suggested that organisations share technical model details such as weights, parameters, uses, and testing. Respondents stated that impact and risk assessments, as well as governance and marketing decisions, should be available to either regulators or the public, with a few noting potential conflicts with trade secrets. Some respondents endorsed independent assurance techniques, such as third-party audits and technical standards.</p> <p>10. A few respondents suggested clarifying legal rights and responsibilities for <abbr title="artificial intelligence">AI</abbr>, with a few of those recommending the introduction of <abbr title="artificial intelligence">AI</abbr> legislation and non-compliance measures.</p> <p><strong>3. Do you agree that current routes to contest or get redress for <abbr title="artificial intelligence">AI</abbr>-related harms are adequate?</strong></p> <p>11. Over half of respondents reported that current routes to contest or seek redress for <abbr title="artificial intelligence">AI</abbr>-related harms through existing legal frameworks are not adequate. In particular, respondents flagged that a lack of transparency around when and how <abbr title="artificial intelligence">AI</abbr> is used prevents users from being able to identify <abbr title="artificial intelligence">AI</abbr>-related harms. Similarly, respondents noted that a lack of transparency around the data used to train <abbr title="artificial intelligence">AI</abbr> models complicates data protection and prevents intellectual property rights holders from exercising their legal and moral rights. A few respondents also noted the high costs of individual litigation and advocated for clearer routes for individual and collective action.</p> <p><strong>4. How could current routes to contest or seek redress for <abbr title="artificial intelligence">AI</abbr>-related harms be improved, if at all?</strong></p> <p>12. Many respondents wanted to see the government clarify legal rights and responsibilities relating to <abbr title="artificial intelligence">AI</abbr>, though there was no consensus on how to do this. Many respondents suggested clarifying rights and responsibilities in existing law through mechanisms such as regulatory guidance. There was also a broad appetite for centralisation in different forms, with some respondents advocating for the creation of a central redress mechanism such as a central <abbr title="artificial intelligence">AI</abbr> regulator, oversight body, coordination function, or lead regulator. Some respondents wanted to see further statutory requirements, such as licensing.</p> <p>13. Many respondents stressed the importance of meaningful transparency and some emphasised the need for accessible redress routes.
Respondents felt that measures to show users when and how <abbr title="artificial intelligence">AI</abbr> is being used would help individuals identify when and how harms had occurred. Respondents wanted to see clear – and in some cases mandatory – routes to contest or seek redress for <abbr title="artificial intelligence">AI</abbr>-related decisions. Respondents noted issues with expensive litigation, particularly in relation to infringement of intellectual property rights. Respondents felt that increasing transparency for <abbr title="artificial intelligence">AI</abbr> systems would make redress more accessible across a broad range of potential harms and, similarly, that clarifying redress routes would improve transparency. Some respondents noted the importance of international agreements to ensure effective routes to contest or seek redress for <abbr title="artificial intelligence">AI</abbr>-related harms across borders. Measures such as moratoriums and mandatory kill switches were only raised by a few respondents.</p> <p><strong>5. Do you agree that, when implemented effectively, the revised cross-sectoral principles will cover the risks posed by <abbr title="artificial intelligence">AI</abbr> technologies?</strong></p> <p>14. A majority of respondents agreed that the principles would cover the risks posed by <abbr title="artificial intelligence">AI</abbr> technologies when implemented effectively. Respondents who disagreed tended to cite concerns around enforcement and a lack of statutory backing for the principles, or wider issues around regulator readiness, including capacity, capabilities, and coordination.</p> <p>15. Respondents often noted a need for the framework to be adaptable, context-focused, and supported by monitoring and evaluation, citing the fast pace of technological change.</p> <p>16. A few respondents felt the terms of the question were unclear and asked for further detail on effective implementation.</p> <p><strong>6. What, if anything, is missing from the revised principles?</strong></p> <p>17. Many respondents advocated for the cross-sectoral <abbr title="artificial intelligence">AI</abbr> principles to more explicitly include human rights and human flourishing, noting that <abbr title="artificial intelligence">AI</abbr> should be used to improve human life. Respondents endorsed different human rights and related values including freedom, pluralism, privacy, equality, inclusion, and accessibility.</p> <p>18. Some respondents wanted further detail on the implementation of the principles. These respondents often asked for more detail on regulator capacity, noting that the “effective implementation” of the principles would require adequate regulator resource, skills, and powers. A couple of respondents asked for more clarity regarding how regulators and organisations are expected to manage trade-offs, such as between explainability and accuracy or between transparency and privacy.</p> <p>19. Linked to this, some respondents wanted further guidance on how the <abbr title="artificial intelligence">AI</abbr> principles would interact with and be implemented through existing legislation. Respondents mostly raised concerns in regard to data protection and intellectual property law, though a few respondents asked for a more holistic sense of the government’s approach to <abbr title="artificial intelligence">AI</abbr> in regard to departmental strategies, such as the Ministry of Defence’s <abbr title="artificial intelligence">AI</abbr> strategy.
Some respondents stated that the principles would be ineffective without statutory backing, with a few emphasising the importance of mandating <abbr title="artificial intelligence">AI</abbr>-related accountability mechanisms.</p> <p>20. Some respondents advocated for the principles to address a range of issues related to operational resilience. These responses suggested measures for adequate security and cyber security, decommissioning processes, protecting competition, ensuring access, and addressing risks associated with over-reliance. A similar number of respondents wanted to see specific principles on data quality and international alignment.</p> <p>21. A few respondents recommended the inclusion of principles that would clearly correlate with systemic risks and wider societal impacts, sustainability, or education and literacy. In regard to systemic risks, respondents tended to raise concerns about the potential harms that <abbr title="artificial intelligence">AI</abbr> technologies can pose to democracy and the rule of law in terms of disinformation and electoral interference.</p> <h3 id="a-statutory-duty-to-regard-1">A statutory duty to regard</h3> <p><strong>7. Do you agree that introducing a statutory duty on regulators to have due regard to the principles would clarify and strengthen regulators’ mandates to implement our principles, while retaining a flexible approach to implementation?</strong></p> <p>22. Over half of respondents somewhat or strongly agreed that a statutory duty would clarify and strengthen the mandate of regulators to implement the framework. However, many noted caveats that are detailed in Q8.</p> <p><strong>8. Is there an alternative statutory intervention that would be more effective?</strong></p> <p>23. Many felt that targeted statutory measures, including expanded regulator powers, would be a more effective statutory intervention. In particular, respondents noted the need for regulators to have appropriate investigatory powers. Some also wanted to see the consequences of breaches more clearly defined. Respondents also suggested specific <abbr title="artificial intelligence">AI</abbr> legislation, a new <abbr title="artificial intelligence">AI</abbr> regulator, and strict rules about the use of <abbr title="artificial intelligence">AI</abbr> in certain contexts as more effective statutory interventions. A couple of respondents mentioned that any <abbr title="artificial intelligence">AI</abbr> duties should fall on those operating within the market rather than on regulators.</p> <p>24. Some respondents felt the proposed statutory duty was the most effective intervention and should be implemented. However, other respondents couched their support within wider concerns that the framework would not be sufficiently enforceable without some kind of statutory backing. Nearly a quarter of respondents emphasised that regulators would need enhanced resources and capabilities in order to enact a statutory duty effectively. Other respondents felt that the implementation of a duty to regard could disrupt regulation, innovation, and trust if rushed. These respondents recommended that the duty should be reviewed after a period of non-statutory implementation, particularly to observe interactions with existing law and regulatory remits. A few respondents noted that the end goal and timeframes for the <abbr title="artificial intelligence">AI</abbr> regulatory framework were not clear, causing uncertainty.</p> <p>25.
There was some support for the government to mandate measures such as third-party audits, certification, and Environmental, Social and Governance (<abbr title="Environmental, Social and Governance">ESG</abbr>)-style supply chain measures, including reporting on training data. A few respondents were supportive of central monitoring to track regulatory compliance and novel technologies that may require an expansion of regulatory scope.</p> <h3 id="new-central-functions-to-support-the-framework-1">New central functions to support the framework</h3> <p><strong>9. Do you agree that the functions outlined in section 3.3.1 would benefit our <abbr title="artificial intelligence">AI</abbr> regulation framework if delivered centrally?</strong></p> <p>26. Nearly all respondents agreed that central delivery of the proposed functions would benefit the framework, with many arguing that centralised activities would allow the government to monitor and iterate the framework. Many suggested that feedback from regulators, industry, academia, civil society, and the general public should be used to measure effectiveness, with some calling for regular review points to assess whether the central function remained fit for purpose. A few respondents were concerned that some of the proposed activities may already be carried out by other organisations and suggested mapping existing work to avoid duplication.</p> <p><strong>10. What, if anything, is missing from the central functions?</strong></p> <p>27. While respondents widely supported the proposed central functions, many wanted to see more detail on the delivery of each activity, with some respondents endorsing a stronger emphasis on engagement and partnerships with existing organisations.</p> <p>28. Responses highlighted the importance of addressing <abbr title="artificial intelligence">AI</abbr>-related risks and building public trust in <abbr title="artificial intelligence">AI</abbr> technologies. Some respondents suggested that the government should prioritise the proposed risk function, noting the importance of identifying and assessing risks related to <abbr title="artificial intelligence">AI</abbr>. Respondents noted that this risk analysis should include ethical risks, such as bias, and systemic risks to society, such as changes to the labour market. A few respondents emphasised that the education and awareness function would be key to building public trust.</p> <p>29. Respondents noted the importance of regulatory alignment across sectors and international regimes. Some respondents argued that the central functions should include more on interoperability, noting cyber security, disinformation, and copyright infringement as issues that will require international collaboration.</p> <p>30. Some respondents suggested that some or all of the central functions should have a statutory underpinning or be delivered by an independent body. Respondents also stressed that, to be effective, the central functions should be adequately resourced and given the necessary technical expertise. This was identified as particularly important to the risk mapping, horizon scanning, and monitoring and evaluation functions.</p> <p>31.
Additional activities or functions suggested by respondents included: statutory powers to ensure the safety and security of highly capable <abbr title="artificial intelligence">AI</abbr> models; coordination with the devolved administrations; and oversight of <abbr title="artificial intelligence">AI</abbr> compliance with existing laws, including intellectual property and data protection frameworks.</p> <p><strong>11. Do you know of any existing organisations who should deliver one or more of our proposed central functions?</strong></p> <p>32. Overall, around a quarter of respondents felt that the government should deliver one or more of the central functions. Respondents also highlighted other organisations that could support the central functions, including regulators, technology-focused research institutes and think tanks, private-sector firms, and academic research groups. Many respondents advocated for the regulatory functions to build from the existing strengths of the UK’s regulatory ecosystem. Respondents noted that regulatory coordination initiatives like the Digital Regulation Cooperation Forum (<abbr title="Digital Regulation Cooperation Forum">DRCF</abbr>) could help identify and respond to gaps in regulator remits. Respondents also highlighted that think tanks and research institutes such as the Alan Turing Institute, Ada Lovelace Institute, and Institute for the Future of Work have past or existing activities that may complement those described in the proposed central functions.</p> <p><strong>12. Are there additional activities that would help businesses confidently innovate and use <abbr title="artificial intelligence">AI</abbr> technologies?</strong></p> <p>33. Many respondents felt the central functions could include further activities to support businesses in applying the principles to everyday practices related to <abbr title="artificial intelligence">AI</abbr>. Respondents argued that the government and regulators should support industry with training programmes and educational resources. Respondents noted that this support would be especially important for organisations operating across or between sectors.</p> <p>34. Respondents felt that regulators should develop and regularly update guidance to allow businesses to innovate confidently. Respondents reported that incoherent and expensive compliance processes could stifle innovation and slow <abbr title="artificial intelligence">AI</abbr> adoption.</p> <p>35. Respondents suggested that the government could improve access to high-quality data, ensure international alignment on <abbr title="artificial intelligence">AI</abbr> requirements, and facilitate collaboration between regulators, industry, and academia. Some respondents noted that responsible <abbr title="artificial intelligence">AI</abbr> innovation is supported by access to high-quality, diverse, and ethically-sourced data. Respondents suggested that government-sponsored data trusts could help improve access to data. Some respondents saw the government playing a key role in ensuring the international harmonisation of <abbr title="artificial intelligence">AI</abbr> regulation, noting that interoperability would promote trade and competition. A few respondents suggested that the government could facilitate collaboration between regulators, industry, and academia to ensure alignment between <abbr title="artificial intelligence">AI</abbr> regulation, innovation, and research.
A small number of respondents suggested introducing <abbr title="artificial intelligence">AI</abbr> legislation rather than central functions to provide greater legal certainty.</p> <p><strong>12.1. If so, should these activities be delivered by government, regulators, or a different organisation?</strong></p> <p>36. While respondents identified some activities to support businesses to confidently innovate and use <abbr title="artificial intelligence">AI</abbr> technologies that should be led by regulators, a majority of respondents suggested that these activities should be delivered by the government.</p> <p><strong>13. Are there additional activities that would help individuals and consumers confidently use <abbr title="artificial intelligence">AI</abbr> technologies?</strong></p> <p>37. Respondents prioritised transparency among the cross-sectoral principles, with nearly half arguing that individuals and consumers should be able to identify when and how <abbr title="artificial intelligence">AI</abbr> is being used by a service or organisation.</p> <p>38. Many respondents felt that education and training would build public trust in <abbr title="artificial intelligence">AI</abbr> technologies and help accelerate adoption. Respondents emphasised that <abbr title="artificial intelligence">AI</abbr> literacy should be improved through education and training that enables consumers to use <abbr title="artificial intelligence">AI</abbr> products and services more effectively. Respondents suggested training should cover all stages of the <abbr title="artificial intelligence">AI</abbr> life cycle and build understanding of <abbr title="artificial intelligence">AI</abbr> benefits as well as <abbr title="artificial intelligence">AI</abbr> risks. Respondents stated that, along with the government and regulators, education, consumer, and advocacy organisations should help make knowledge accessible.</p> <p>39. Some respondents wanted to see clearer routes for consumers to contest or seek redress for <abbr title="artificial intelligence">AI</abbr>-related harms. Some emphasised the importance of adequate data protection measures. A few respondents noted that <abbr title="artificial intelligence">AI</abbr>-specific legislation would provide legal certainty and help foster public trust.</p> <p><strong>13.1. If so, should these activities be delivered by the government, regulators, or a different organisation?</strong></p> <p>40. While most respondents recommended that the government, regulators, industry, and civil society work together to help individuals and consumers confidently use <abbr title="artificial intelligence">AI</abbr> technologies, nearly half of respondents suggested that activities to improve consumer confidence in <abbr title="artificial intelligence">AI</abbr> should be delivered by the government.</p> <p><strong>14. How can we avoid overlapping, duplicative, or contradictory guidance on <abbr title="artificial intelligence">AI</abbr> issued by different regulators?</strong></p> <p>41. Many respondents suggested the proposed central functions would be the most effective mechanism to avoid overlapping, duplicative, or contradictory guidance. Respondents noted that the central functions would support regulators by identifying cross-sectoral risks, facilitating consistent risk management actions, providing guidance on cross-sectoral issues, and monitoring and evaluating the framework as a whole.</p> <p>42.
While respondents stressed that consistent implementation of the framework across remits would require regulatory coordination, there was no agreement on the best way to achieve this. Some suggested establishing a new <abbr title="artificial intelligence">AI</abbr> regulator, a few proposed appointing an existing regulator as the ‘lead regulator’, and others endorsed voluntary regulatory coordination measures, emphasising the role of regulatory fora such as the Digital Regulation Cooperation Forum (<abbr title="Digital Regulation Cooperation Forum">DRCF</abbr>).</p> <p>43. Some respondents suggested that horizontal cross-sector standards and assurance techniques would encourage consistency across regulatory remits, sectors, and international jurisdictions. Respondents recommended clarifying the specific remits of each regulator in relation to <abbr title="artificial intelligence">AI</abbr> to promote coherence across the regulatory landscape. A few argued that introducing <abbr title="artificial intelligence">AI</abbr> legislation, including putting the <abbr title="artificial intelligence">AI</abbr> principles and regulatory coordination into statute, would prevent regulatory divergence.</p> <h3 id="monitoring-and-evaluation-of-the-framework-1">Monitoring and evaluation of the framework</h3> <p><strong>15. Do you agree with our overall approach to monitoring and evaluation?</strong></p> <p>44. Over half of respondents agreed with the overall approach to monitoring and evaluation set out in the <abbr title="artificial intelligence">AI</abbr> regulation white paper. Many commended the proposals for a feedback loop and advised that industry, regulators, and civil society should be engaged to help measure the effectiveness of the framework. Respondents broadly supported an iterative approach, and some suggested consulting industry as part of a regular evaluation to assess and adapt the framework. A few respondents advocated for findings from framework evaluations to be publicly available.</p> <p>45. Some respondents stated that there was not enough detail or that the approach to monitoring and evaluation was unclear. To determine the practicality of the approach, respondents requested more information about the format, frequency, and sources of data that will be developed and used. Some of these respondents stressed the importance of identifying issues with the framework in a timely way. Respondents emphasised that <abbr title="artificial intelligence">AI</abbr> risks will need to be continuously monitored, noting that more clarity and transparency are needed on how risks will be escalated and addressed.</p> <p><strong>16. What is the best way to measure the impact of our framework?</strong></p> <p>46. Many respondents suggested a data-driven approach to measuring the impact of the framework would be most effective. Respondents recommended qualitative and quantitative data collection, impact assessments, and key performance indicators (<abbr title="key performance indicators">KPIs</abbr>). Examples of possible <abbr title="key performance indicators">KPIs</abbr> included consumer trust and satisfaction, rate of innovation, time to market, complaints and adverse events, litigation, and compliance costs. A few respondents suggested using economic growth to measure the impact of the framework. 
A couple wanted to see measurements tailored to specific sectors and suggested that the government engage with regulators to understand how they measure regulatory impacts on their respective industries.</p> <p>47. Just over a quarter of respondents recommended that the government maintain a close dialogue with industry, civil society, and international partners. Respondents repeatedly stressed the importance of gathering a holistic view of impact, with many noting that the government should engage with stakeholders who can offer different perspectives on the framework’s efficacy, including start-ups and small businesses. Respondents felt that broad consultation to gather evidence on public attitudes towards the framework and <abbr title="artificial intelligence">AI</abbr> more generally would also be useful.</p> <p>48. Respondents suggested that international interoperability should be monitored to ensure that the framework allows businesses to trade with and develop products for international markets. Some respondents suggested referencing established indicators and frameworks, such as the United Nations Sustainable Development Goals and the Five Capitals, to inform a set of qualitative and quantitative measures.</p> <p><strong>17. Do you agree that our approach strikes the right balance between supporting <abbr title="artificial intelligence">AI</abbr> innovation; addressing known, prioritised risks; and future-proofing the <abbr title="artificial intelligence">AI</abbr> regulation framework?</strong></p> <p>49. Half of respondents agreed that the approach strikes the right balance between supporting <abbr title="artificial intelligence">AI</abbr> innovation; addressing known, prioritised risks; and future-proofing the <abbr title="artificial intelligence">AI</abbr> regulation framework. However, some respondents were concerned that the approach would not be able to keep pace with the technological development of <abbr title="artificial intelligence">AI</abbr>, stating that adequate future-proofing of the framework will depend on retaining flexibility and adaptability when implementing the principles. Respondents wanted greater clarity on the specific areas to be regulated and stressed that regulators need to be proactive in identifying the risk of harm.</p> <p>50. Over a third of respondents disagreed. Respondents were concerned that the framework does not clearly allocate responsibility for <abbr title="artificial intelligence">AI</abbr> outcomes. Some thought that the focus on <abbr title="artificial intelligence">AI</abbr> innovation, economic growth, and job creation would prevent a sufficient focus on <abbr title="artificial intelligence">AI</abbr>-related risks, such as bias and discrimination.</p> <h3 id="regulator-capabilities-1">Regulator capabilities</h3> <p><strong>18. Do you agree that regulators are best placed to apply the principles and the government is best placed to provide oversight and deliver central functions?</strong></p> <p>51. Nearly all respondents agreed that regulators are best placed to implement the principles and that the government is best placed to provide oversight and deliver the central functions.</p> <p>52. 
While respondents noted that regulators’ domain-specific expertise would be key to the effective tailoring of the cross-sectoral principles to sector needs, some also suggested that the government should support regulators to manage <abbr title="artificial intelligence">AI</abbr> risks within their remits by building their technical <abbr title="artificial intelligence">AI</abbr> skills and expertise.</p> <p>53. Some respondents argued that the government would need to work closely with regulators to provide effective oversight of the framework and delivery of the central functions. Some also endorsed further collaboration between regulators. A few felt that the government’s oversight of the framework should be open and transparent, advocating for input from industry and civil society.</p> <p>54. Some respondents were concerned that no current bodies were best placed to support the implementation and oversight of the proposed framework, with a few asking for <abbr title="artificial intelligence">AI</abbr> legislation and a new <abbr title="artificial intelligence">AI</abbr> regulator.</p> <p><strong>19. As a regulator, what support would you need in order to apply the principles in a proportionate and pro-innovation way?</strong></p> <p>55. While regulators that responded to this question supported the proposed framework, just over a quarter argued that the key challenge to proportionate and pro-innovation implementation would be coordination. Regulators saw value in sharing best practices to aid consistency and build existing knowledge into sector-specific approaches. Many suggested that strong mechanisms to share information between regulators and the proposed central functions would help avoid duplicate requirements across multiple regulators.</p> <p>56. Regulators that responded to this question reported inconsistent <abbr title="artificial intelligence">AI</abbr> capabilities, with over a quarter asking for further support in technical expertise and others demonstrating advanced approaches to addressing <abbr title="artificial intelligence">AI</abbr> within their remits. Regulators identified common capability gaps including a lack of technical <abbr title="artificial intelligence">AI</abbr> knowledge and limited understanding of where and how <abbr title="artificial intelligence">AI</abbr> is used by those they regulate. Some suggested that government support in building internal organisational capacity would help them to effectively apply the principles within their existing remits, with some noting that they struggle to compete with the private sector to recruit the right technical expertise and skills. A couple of regulators highlighted how initiatives such as the government-funded Regulators’ Pioneer Fund have already allowed them to develop approaches to responsible <abbr title="artificial intelligence">AI</abbr> innovation in their remits. Two regulators reported that the scope of their existing statutory remits and powers in relation to <abbr title="artificial intelligence">AI</abbr> is unclear. These regulators asked for further details on how the central function would ensure that regulators used their powers and remits in a coherent way as they apply the principles.</p> <p><strong>20. Do you agree that a pooled team of <abbr title="artificial intelligence">AI</abbr> experts would be the most effective way to address capability gaps and help regulators apply the principles?</strong></p> <p>57. 
Over three quarters of respondents agreed that a pooled team of <abbr title="artificial intelligence">AI</abbr> experts would be the most effective way to build common capability and address gaps. Respondents felt that a team of pooled <abbr title="artificial intelligence">AI</abbr> experts could help regulators to understand <abbr title="artificial intelligence">AI</abbr> and address its unique characteristics within their sectors, supporting the consistent application of the principles across remits.</p> <p>58. While respondents supported increasing regulators’ access to <abbr title="artificial intelligence">AI</abbr> expertise, many stressed that a pooled team would need to contain diverse and multi-disciplinary perspectives. Respondents felt the pooled team should bring together technical <abbr title="artificial intelligence">AI</abbr> expertise with sector-specific knowledge, industry specialists, and civil society to ensure that regulators are considering a broad range of views in their application of the principles.</p> <p>59. Some respondents stated that a pool of experts would be insufficient and suggested that in-house regulator capability with sector-specific expertise should be prioritised.</p> <h3 id="tools-for-trustworthy-ai-1">Tools for trustworthy <abbr title="artificial intelligence">AI</abbr></h3> <p><strong>21. Which non-regulatory tools for trustworthy <abbr title="artificial intelligence">AI</abbr> would most help organisations to embed the <abbr title="artificial intelligence">AI</abbr> regulation principles into existing business processes?</strong></p> <p>60. There was strong support for the use of technical standards and assurance techniques, with respondents agreeing that both would help organisations to embed the <abbr title="artificial intelligence">AI</abbr> principles into existing business processes. Many respondents praised the UK <abbr title="artificial intelligence">AI</abbr> Standards Hub and the Centre for Data Ethics and Innovation’s (<abbr title="Centre for Data Ethics and Innovation">CDEI</abbr>) work on <abbr title="artificial intelligence">AI</abbr> assurance. While some respondents noted that businesses would have a smaller compliance burden if tools and processes were consistent across sectors, others noted the importance of additional sector-specific tools and processes. Respondents also suggested supplementing technical standards with case studies and examples of good practice.</p> <p>61. Respondents argued that standardised tools and techniques for identifying and mitigating potential risks related to <abbr title="artificial intelligence">AI</abbr> would also support organisations to embed the <abbr title="artificial intelligence">AI</abbr> principles. Some identified assurance techniques such as impact and risk assessments, model performance monitoring, model uncertainty evaluations, and red teaming as particularly helpful for identifying <abbr title="artificial intelligence">AI</abbr> risks. A few respondents recommended assurance techniques that can detect and prevent issues such as drift, helping to mitigate data-related risks. While commending the role of tools for trustworthy <abbr title="artificial intelligence">AI</abbr>, a few respondents also expressed a desire for more stringent regulatory measures, such as statutory requirements for high-risk applications of <abbr title="artificial intelligence">AI</abbr> or a watchdog for foundation models.</p>
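<p>As an illustration only, and not a technique drawn from the consultation responses, a minimal data-drift check of the kind respondents describe might compare the distribution of a model input in live use against a reference sample. The sketch below uses a two-sample Kolmogorov-Smirnov test; the feature name, data, and significance threshold are assumptions made for the example.</p> <pre><code># Illustrative sketch of a simple data-drift check of the kind respondents
# describe. The feature, data, and 0.05 significance threshold are
# assumptions for the example, not values from the consultation.
from scipy.stats import ks_2samp

def detect_drift(reference, live, alpha=0.05):
    """Return the features whose live distribution appears to have drifted.

    reference, live: dicts mapping feature name to a list of numeric values.
    A feature is flagged when a two-sample Kolmogorov-Smirnov test rejects
    the hypothesis that both samples come from the same distribution.
    """
    drifted = []
    for feature, ref_values in reference.items():
        result = ks_2samp(ref_values, live[feature])
        if result.pvalue &lt; alpha:  # distributions differ beyond chance
            drifted.append(feature)
    return drifted

# Hypothetical example: training-time reference data against live data.
reference = {"loan_amount": [9_000 + 40 * i for i in range(100)]}
live = {"loan_amount": [15_000 + 55 * i for i in range(100)]}
print(detect_drift(reference, live))  # ['loan_amount'] is flagged
</code></pre>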
<p>62. Respondents felt that tools and techniques such as fairness metrics, transparency reports, and organisational <abbr title="artificial intelligence">AI</abbr> ethics guidelines can support the responsible use of <abbr title="artificial intelligence">AI</abbr> while growing public trust in the technology. Respondents expressed the desire for third-party verification of <abbr title="artificial intelligence">AI</abbr> models through bias audits, consumer labelling schemes, and external certification against technical standards.</p> <p>63. A few respondents noted the benefits of international harmonisation across <abbr title="artificial intelligence">AI</abbr> governance approaches for both organisations and consumers. Some endorsed interoperable technical standards for <abbr title="artificial intelligence">AI</abbr>, commending international standards development organisations (<abbr title="Standards Development Organisations">SDOs</abbr>) such as the International Organization for Standardization (<abbr title="International Organization for Standardization">ISO</abbr>) and the Institute of Electrical and Electronics Engineers (<abbr title="Institute of Electrical and Electronics Engineers">IEEE</abbr>). Others noted the strength of a range of international work on <abbr title="artificial intelligence">AI</abbr>, including that by individual countries, such as the USA’s National Institute of Standards and Technology (<abbr title="National Institute of Standards and Technology">NIST</abbr>) <abbr title="artificial intelligence">AI</abbr> Risk Management Framework (<abbr title="Risk Management Framework">RMF</abbr>) and Singapore’s <abbr title="artificial intelligence">AI</abbr> Verify Foundation, along with work on international governance by multilateral bodies such as the Organisation for Economic Co-operation and Development (<abbr title="Organisation for Economic Co-operation and Development">OECD</abbr>), United Nations (<abbr title="United Nations">UN</abbr>), and <abbr title="Group of Seven">G7</abbr>.</p> <h3 id="final-thoughts-1">Final thoughts</h3> <p><strong>22. Do you have any other thoughts on our overall approach? Please include any missed opportunities, flaws, and gaps in our framework.</strong></p> <p>64. Some respondents felt that the <abbr title="artificial intelligence">AI</abbr> regulation framework set out in the white paper would benefit from more detailed guidance on <abbr title="artificial intelligence">AI</abbr>-related risks. Some wanted to see more stringent measures for severe risks, particularly related to the use of <abbr title="artificial intelligence">AI</abbr> in safety-critical contexts. Respondents suggested that the framework would be clearer if the government provided risk categories for certain uses of <abbr title="artificial intelligence">AI</abbr>, such as law enforcement and places of work. Other respondents stressed that <abbr title="artificial intelligence">AI</abbr> can pose or accelerate significant risks related to privacy and data protection breaches, cyberattacks, electoral interference, misinformation, human rights infringements, environmental sustainability, and competition issues. A few respondents were concerned about the potential existential risk posed by <abbr title="artificial intelligence">AI</abbr>. Many respondents felt that <abbr title="artificial intelligence">AI</abbr> technologies are developing faster than regulatory processes.</p> <p>65. 
Respondents argued that the success of the framework relies on sufficient coordination between regulators to provide a clear and consistent approach to <abbr title="artificial intelligence">AI</abbr> across sectors and markets. Respondents also noted that different sectors face particular <abbr title="artificial intelligence">AI</abbr>-related benefits and risks, suggesting that the framework would need to balance the consistency provided by cross-sector requirements with the accuracy of sector-specific approaches. In particular, respondents flagged that any new rules or bodies to regulate <abbr title="artificial intelligence">AI</abbr> should build from the existing statutory remits of regulators and relevant regulatory standards. Respondents also noted that regulators would need to be adequately resourced with technical expertise and skills to implement the framework effectively.</p> <p>66. Respondents consistently emphasised that effective <abbr title="artificial intelligence">AI</abbr> regulation relies on international harmonisation. Respondents suggested that the UK should work towards an internationally aligned regulatory ecosystem for <abbr title="artificial intelligence">AI</abbr> by developing a gold-standard framework and promoting best practice through key multilateral channels such as the <abbr title="Organisation for Economic Co-operation and Development">OECD</abbr>, <abbr title="United Nations">UN</abbr>, <abbr title="Group of Seven">G7</abbr>, and <abbr title="Group of 20">G20</abbr>. Respondents noted that divergent or overlapping approaches to regulating <abbr title="artificial intelligence">AI</abbr> would cause significant compliance burdens. Respondents argued that international cooperation can support responsible <abbr title="artificial intelligence">AI</abbr> innovation in the UK by creating clear and certain rules that allow investments to move across multiple markets. Respondents also suggested establishing bilateral working groups with key strategic partners to share expertise. Some respondents stressed that the UK’s pro-innovation approach should be delivered at pace to remain competitive in a fast-moving international landscape.</p> <h3 id="legal-responsibility-for-ai-1">Legal responsibility for <abbr title="artificial intelligence">AI</abbr></h3> <p><strong>L1. What challenges might arise when regulators apply the principles across different <abbr title="artificial intelligence">AI</abbr> applications and systems? How could we address these challenges through our proposed <abbr title="artificial intelligence">AI</abbr> regulatory framework?</strong></p> <p>67. Respondents felt that there were two core challenges for regulators applying the principles across different <abbr title="artificial intelligence">AI</abbr> applications and systems: a lack of clear legal responsibility across complicated <abbr title="artificial intelligence">AI</abbr> life cycles and issues with coordination across regulators and sectors.</p> <p>68. Over a quarter of respondents felt it was not clear who would be held liable for <abbr title="artificial intelligence">AI</abbr>-related risks. Some respondents raised a further concern about confusing interactions between the framework and existing legislation.</p> <p>69. While nearly half of respondents were concerned about coordination and consistency across sectors and regulatory remits, some indicated that a solution (and the strength of the framework) lay in a context-based approach. 
Respondents asked for sector-based guidance from regulators, compliance tools, and regulator engagement with industry.</p> <p>70. Many respondents suggested introducing statutory requirements or centralising the framework within a single organisational body, but there was no consensus on whether this centralisation should take the form of a lead regulator, central regulator, or coordination function. Some respondents suggested mandating industry transparency or third-party audits.</p> <p>71. Respondents also raised a lack of international standards and agreements as a challenge, pointing to the importance of international alignment and collaboration.</p> <p><strong>L2.i. Do you agree that the implementation of our principles through existing legal frameworks will fairly and effectively allocate legal responsibility for <abbr title="artificial intelligence">AI</abbr> across the life cycle?</strong></p> <p>72. While some respondents somewhat agreed that the principles would allocate legal responsibility for <abbr title="artificial intelligence">AI</abbr> fairly and effectively through existing legal frameworks, most respondents either disagreed or neither agreed nor disagreed. Many respondents stated that it is not clear how the <abbr title="artificial intelligence">AI</abbr> regulation principles would be implemented through existing legal frameworks. Respondents voiced concerns about gaps in existing legislation, including intellectual property, legal services, and employment law. Some respondents stated that intellectual property rights needed to be affirmed and clarified to improve legal responsibility for <abbr title="artificial intelligence">AI</abbr>. A few respondents noted the need for the <abbr title="artificial intelligence">AI</abbr> framework to monitor and adapt as the technology advances and becomes more widely used. One respondent noted that the burden of liability falls at the deployer level and suggested that it would be essential to address information gaps in the <abbr title="artificial intelligence">AI</abbr> life cycle to improve the allocation of legal responsibility.</p> <p><strong>L2.ii. How could it be improved, if at all?</strong></p> <p>73. Many respondents felt that the framework needed to further clarify liability across the <abbr title="artificial intelligence">AI</abbr> life cycle. In particular, respondents repeatedly noted the need for a legally responsible person for <abbr title="artificial intelligence">AI</abbr>, and some suggested a model similar to Data Protection Officers.</p> <p>74. Over a quarter of respondents stated that new <abbr title="artificial intelligence">AI</abbr> legislation or regulator powers would be necessary to effectively allocate liability across the life cycle. Some named specific measures that would need statutory underpinning, with a few advocating for licensing and pre-approvals and a couple suggesting a moratorium on the most advanced <abbr title="artificial intelligence">AI</abbr>.</p> <p>75. Others felt that it would be best to clarify legal responsibility for <abbr title="artificial intelligence">AI</abbr> according to existing frameworks. Respondents wanted clarity on how the principles would be applied with or through existing law, with some suggesting that regulatory guidance would provide greater certainty.</p> <p>76. 
Respondents also suggested that non-statutory measures such as enhancing technical regulator capability, domestic and international standards, and assurance techniques would help fairly and effectively allocate legal responsibility across the <abbr title="artificial intelligence">AI</abbr> life cycle.</p> <p>77. Others noted that the proposed central functions, including risk assessment, horizon scanning, and monitoring and evaluation, would be key to ensuring that legal responsibility for <abbr title="artificial intelligence">AI</abbr> was fairly and effectively distributed across the life cycle as <abbr title="artificial intelligence">AI</abbr> capabilities advance and become increasingly used.</p> <p><strong>L3. If you are a business that develops, uses, or sells <abbr title="artificial intelligence">AI</abbr>, how do you currently manage <abbr title="artificial intelligence">AI</abbr> risk including through the wider supply chain? How could government support effective <abbr title="artificial intelligence">AI</abbr>-related risk management?</strong></p> <p>78. Nearly half of respondents to this question told us that they had implemented risk assessment processes for <abbr title="artificial intelligence">AI</abbr> within their organisation. Many used existing best practice processes and guidance from their sector or trade bodies such as techUK. Some felt that the proliferation of different organisational risk assessment processes reflected the absence of overarching guidance and best practice from the government. Of these respondents, many suggested that it would be useful for businesses to understand the government’s view on <abbr title="artificial intelligence">AI</abbr>-related best practices, with some recommending a central guide on using <abbr title="artificial intelligence">AI</abbr> safely.</p> <p>79. Many respondents noted their compliance with existing legal frameworks that capture <abbr title="artificial intelligence">AI</abbr>-related risks, such as product safety and personal data protections. Respondents highlighted that any future <abbr title="artificial intelligence">AI</abbr> measures should avoid duplicating or contradicting existing rules and laws.</p> <p>80. Respondents consistently stressed the importance of transparency, with some highlighting information-sharing tools like model cards. Similarly to Q2, some respondents suggested that labelling <abbr title="artificial intelligence">AI</abbr> use would be beneficial to users, particularly with regard to building literacy around potentially malicious <abbr title="artificial intelligence">AI</abbr>-generated content, such as deepfakes and disinformation. A few respondents argued that <abbr title="artificial intelligence">AI</abbr> labelling can help shape expectations of a service and should be a consumer protection measure. Echoing answers to F1, respondents also mentioned that services should be transparent about the data used to train <abbr title="artificial intelligence">AI</abbr> models so users can understand how tools and services work as well as their limitations.</p>
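<p>To make the reference to model cards concrete, the sketch below shows a hypothetical, minimal model card expressed as a plain data structure. The fields reflect common practice (purpose, data provenance, limitations); they are illustrative assumptions, not a schema mandated by the framework or proposed by respondents.</p> <pre><code># A hypothetical, minimal model card as a plain data structure. The field
# names follow common practice but are illustrative, not a mandated schema.
import json

model_card = {
    "model_name": "loan-default-classifier",  # hypothetical system
    "version": "1.2.0",
    "intended_use": "Ranking retail loan applications for human review.",
    "out_of_scope_uses": ["Fully automated credit decisions"],
    "training_data": {
        "source": "Internal applications, 2018 to 2022 (anonymised).",
        "known_gaps": "Under-represents applicants aged under 25.",
    },
    "evaluation": {"accuracy": 0.87, "false_positive_rate": 0.06},
    "limitations": "Performance degrades for self-employed applicants.",
    "contact": "ai-governance@example.com",  # illustrative address
}

# Publishing the card alongside the model lets deployers and users see
# how the tool was built and where it should not be relied upon.
print(json.dumps(model_card, indent=2))
</code></pre>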
<p>81. Responses showed that the size of an organisation shaped its capacity to assess <abbr title="artificial intelligence">AI</abbr>-related risks. While larger organisations mentioned that they engage with customers and suppliers to shape and share best practices, some smaller businesses asked for further support to assess <abbr title="artificial intelligence">AI</abbr>-related risk and implement the <abbr title="artificial intelligence">AI</abbr> principles effectively.</p> <h3 id="foundation-models-and-the-regulatory-framework-1">Foundation models and the regulatory framework</h3> <p><strong>F1. What specific challenges will foundation models such as large language models (<abbr title="large language models">LLMs</abbr>) or open-source models pose for regulators trying to determine legal responsibility for <abbr title="artificial intelligence">AI</abbr> outcomes?</strong></p> <p>82. While respondents supported the <abbr title="artificial intelligence">AI</abbr> regulation framework set out in the white paper, many were concerned that foundation models may warrant a bespoke regulatory approach. In particular, respondents noted that foundation models are characterised by their technical complexity and stressed their potential to underpin many different applications across multiple sectors. Nearly a quarter of respondents emphasised that foundation models make it difficult to determine legal responsibility for <abbr title="artificial intelligence">AI</abbr> outcomes, with some sharing hypothetical use cases where both upstream and downstream actors are at fault. Respondents stressed that technical opacity, complex supply chains, and information asymmetries prevent sufficient explainability, accountability, and risk assessment for foundation models.</p> <p>83. Many respondents were concerned about the quality of the data used to train foundation models and whether training data is appropriate for all downstream model applications. Respondents stated that it was not clear whether data used to train foundation models complies with existing laws, such as those for data protection and intellectual property. Respondents noted that definitions and standards for training data were lacking. Respondents felt that data use could be improved through better information-sharing measures, benchmark measurements and standards, and the clear allocation of responsibility to a specific actor or person for whether or not data is appropriate to a given application.</p> <p>84. Some respondents emphasised the complexity of foundation model supply chains and argued that information asymmetries between upstream developers (with technical oversight) and downstream deployers (with application oversight) not only muddy legal responsibility for <abbr title="artificial intelligence">AI</abbr> outcomes but also prevent sufficient risk monitoring and mitigation. While some respondents noted the concentrated market power of foundation model developers and suggested these actors were best positioned to mitigate related risks, others argued that developers would have limited sight of the risks linked to specific downstream applications. Many raised concerns about the lack of measures to rigorously judge the appropriateness of a foundation model to a given application.</p> <p>85. A few respondents noted concerns regarding wider access to <abbr title="artificial intelligence">AI</abbr>, including open source, leaking, or malicious use. However, a similar number of respondents noted the importance of open source to <abbr title="artificial intelligence">AI</abbr> innovation, transparency, and trust.</p> <p><strong>F2. 
Do you agree that measuring compute provides a potential tool that could be considered as part of the governance of foundation models?</strong></p> <p>86. Half of respondents felt compute was an inadequate proxy for governance requirements, with many arguing that the fast pace of technological change would mean compute-related thresholds would quickly become outdated. However, nearly half somewhat agreed that measuring compute would be useful for foundation model governance, suggesting that, when used with other governance measures, it could help assess whether a particular <abbr title="artificial intelligence">AI</abbr> model should follow certain requirements. A few respondents noted that measuring compute would be one way to capture the environmental impact of different <abbr title="artificial intelligence">AI</abbr> models.</p>
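<p>To illustrate how compute could operate as a governance measure, the sketch below estimates training compute from a model’s parameter and training-token counts using a common rule of thumb, and compares it against a threshold. The formula, model size, and threshold value are illustrative assumptions, not figures from the white paper or the consultation.</p> <pre><code># Illustrative sketch of compute as a governance proxy. The approximation
# FLOPs = 6 x parameters x training tokens is a widely used rule of thumb
# for dense transformer training; the 1e25 threshold is a hypothetical
# example, not a figure from the white paper or the consultation.

def estimated_training_flops(parameters: float, tokens: float) -> float:
    """Rough training compute for a dense transformer (forward + backward)."""
    return 6 * parameters * tokens

def exceeds_threshold(parameters: float, tokens: float,
                      threshold: float = 1e25) -> bool:
    return estimated_training_flops(parameters, tokens) >= threshold

# A hypothetical 70-billion-parameter model trained on 2 trillion tokens:
flops = estimated_training_flops(70e9, 2e12)
print(f"{flops:.1e} FLOPs; above threshold: {exceeds_threshold(70e9, 2e12)}")
# Prints roughly 8.4e+23 FLOPs, below the assumed 1e25 threshold, so
# compute alone would not trigger extra requirements here, which
# illustrates why respondents suggested combining it with other measures.
</code></pre>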
<p><strong>F3. Are there other approaches to governing foundation models that would be more effective?</strong></p> <p>87. There was wide support for governance measures and tools for trustworthy <abbr title="artificial intelligence">AI</abbr>, with respondents advocating for the use of organisational governance, technical standards, and assurance techniques dedicated to foundation models.</p> <p>88. Some respondents recommended assessing foundation model capabilities and applications rather than compute. Respondents felt that model verification measures, such as audits and evaluations, would be effective, with some suggesting these should be mandatory requirements. Some respondents noted the importance of downstream monitoring or post-market surveillance. One respondent suggested a pre-deployment sandbox.</p> <p>89. A small number of respondents wanted to see statutory requirements on foundation models. A few endorsed moratoriums, bans, or limits on foundation models and their uses. Others suggested using contracts, licences, and user agreements, with respondents also noting the importance of both physical and cyber security measures.</p> <h3 id="ai-sandboxes-and-testbeds-1"> <abbr title="artificial intelligence">AI</abbr> sandboxes and testbeds</h3> <p><strong>S1. Which of the sandbox models described in section 3.3.4 would be most likely to support innovation?</strong></p> <p>90. While a large majority of respondents were strongly supportive of sandboxes in general, the “multiple sector, multiple regulator” (<abbr title="Multiple Sector, Multiple Regulator">MSMR</abbr>) and “single sector, multiple regulator” (<abbr title="Single Sector, Multiple Regulator">SSMR</abbr>) models were seen as the most likely to support innovation.</p> <p>91. Over a third of respondents felt the <abbr title="Multiple Sector, Multiple Regulator">MSMR</abbr> model would support innovation, noting that the cross-sectoral basis would enable regulators to develop effective guidance in response to live issues, harmonise rules, coordinate implementation, ensure applicability to safety-critical sectors, and identify complementary policy levers. Respondents suggested that an <abbr title="Multiple Sector, Multiple Regulator">MSMR</abbr> sandbox should tackle issues related to the implementation of the <abbr title="artificial intelligence">AI</abbr> principles, including identifying and addressing any gaps in the framework, overlap with existing regulation, coordination challenges between sectors and regulators, and any blockers to effective implementation of the regulatory framework, such as regulator capacity. Respondents also stressed that the sandbox should be flexible and adaptable to future-proof against new technological developments.</p> <p>92. An equal number of respondents endorsed the <abbr title="Single Sector, Multiple Regulator">SSMR</abbr> model. Respondents noted that the <abbr title="Single Sector, Multiple Regulator">SSMR</abbr> and “multiple sector, single regulator” (<abbr title="Multiple Sector, Single Regulator">MSSR</abbr>) models would be easier to launch due to their more streamlined coordination across a single sector or regulator. For this reason, respondents felt that these models might drive the most immediate value. Some suggested that an initial single sector or single regulator sandbox could be adapted into an <abbr title="Multiple Sector, Multiple Regulator">MSMR</abbr> model as work progressed to capture the benefits of both models.</p> <p><strong>S2. What could the government do to maximise the benefit of sandboxes to <abbr title="artificial intelligence">AI</abbr> innovators?</strong></p> <p>93. Some respondents argued that the sandbox should be developed and delivered in collaboration with businesses, regulators, consumer groups, and academics and other experts. Respondents suggested building on the existing strengths of the UK regulatory landscape, such as facilitating cross-sector learnings through the Digital Regulation Cooperation Forum (<abbr title="Digital Regulation Cooperation Forum">DRCF</abbr>).</p> <p>94. Respondents stated that the sandbox should develop guidance, share information and tools, and provide support to <abbr title="artificial intelligence">AI</abbr> innovators. In particular, respondents said that information about opportunities for involvement should be shared and noted that sharing outcomes would encourage wider participation. Respondents wanted the sandbox to be open and transparent, with many advocating for sandbox processes, regulatory assessments and reports, decision processes, evidence reviews, and subsequent findings to be made available to the public. Respondents suggested that regular reports and guidance from the sandbox would inform innovators and future regulation by creating “business-as-usual” processes. Respondents felt that measures should be taken to make the sandbox as accessible as possible, with a few advocating for dedicated pathways and training for smaller businesses.</p> <p>95. Respondents felt that the sandbox should be used to inform and develop technical standards and assurance techniques that can be widely used. A few mentioned that this would help promote best practice across industry. Others noted that, to be most beneficial, the sandbox should be well aligned with wider regulation for <abbr title="artificial intelligence">AI</abbr>. Respondents also noted that a sandbox presents an opportunity for the UK to demonstrate global leadership in <abbr title="artificial intelligence">AI</abbr> regulation and technical standards by sharing findings and best practices internationally.</p> <p>96. Respondents noted that the sandbox could support innovation by providing market advantages, such as product certification, to maximise the benefits to <abbr title="artificial intelligence">AI</abbr> innovators. Other financial incentives suggested by respondents included innovation grants, tax credits, and free or funded participation in supervised test environment sandboxes. A few stakeholders agreed that funding would help start-ups and smaller businesses with fewer organisational resources to participate in research and development focused sandboxes. 
Respondents suggested that the sandbox could collaborate with UK and international investment companies to build opportunities for participating firms.</p> <p><strong>S3. What could the government do to facilitate participation in an <abbr title="artificial intelligence">AI</abbr> regulatory sandbox?</strong></p> <p>97. Some respondents suggested that grants, subsidies, and tax credits would encourage participation by smaller businesses and start-ups in resource-intensive, research and development focused sandbox models such as supervised test environments.</p> <p>98. Respondents endorsed a range of incentives to facilitate participation in different sandbox models, including access to standardised and anonymised datasets, and accreditation schemes that would show alignment with regulatory requirements and help gain market access. There was some support for innovation competitions that would help select participants.</p> <p>99. Similarly to S2, respondents agreed that collaboration and consultation with a range of stakeholders would help facilitate broad participation. Respondents suggested research centres, accelerator programmes, and university partnerships. There was support for a diverse group of stakeholders to be involved in the early stages of sandbox development, especially to identify regulatory areas with high risk. There was some support for harmonised evaluation frameworks across sectors to reduce regulatory burden and encourage wider interest from prospective stakeholders. One respondent proposed a dedicated online platform that would provide access to relevant guidance, a portal for submitting and tracking applications, and a community forum.</p> <p>100. There was broad support for a simple application process with clear guidelines, templates, and information on eligibility and legal requirements. Respondents expressed support for clear entry and exit criteria, noting the importance of reducing the administrative burden on smaller businesses and start-ups to lower the barrier to entry.</p> <p><strong>S4. Which industry sectors or classes of product would most benefit from an <abbr title="artificial intelligence">AI</abbr> sandbox?</strong></p> <p>101. While there was no overall consensus on a specific sector or class of product that would most benefit from an <abbr title="artificial intelligence">AI</abbr> sandbox, respondents identified two “safety-critical” sectors with a high degree of potential risk: healthcare and transport. Respondents noted that real-world testing is often not possible in these sectors, which would therefore benefit from an <abbr title="artificial intelligence">AI</abbr> sandbox. Respondents noted the potential to enhance healthcare outcomes, patient safety, and compliance with patient privacy guidelines by fostering innovation in areas such as diagnostic tools, personalised medicine, drug discovery, and medical devices. Other respondents noted the rise of autonomous vehicles and intelligent transportation systems, along with significant enthusiasm from industry to test the regulatory framework.</p> <p>102. Some respondents suggested that financial services and insurance would benefit from an <abbr title="artificial intelligence">AI</abbr> sandbox due to heavy investment from the sector in automation and <abbr title="artificial intelligence">AI</abbr>. 
Respondents also noted that financial services and insurance are overseen by multiple regulators, including the Information Commissioner’s Office (<abbr title="Information Commissioner's Office">ICO</abbr>), Prudential Regulation Authority (<abbr title="Prudential Regulation Authority">PRA</abbr>), Financial Conduct Authority (<abbr title="Financial Conduct Authority">FCA</abbr>), and The Pensions Regulator (<abbr title="The Pensions Regulator">TPR</abbr>). Respondents noted that financial services could leverage an <abbr title="artificial intelligence">AI</abbr> sandbox to explore <abbr title="artificial intelligence">AI</abbr>-based applications for risk assessment, fraud detection, algorithmic trading, and customer service.</p> <p>103. One respondent noted that the nuclear sector is already benefiting from an <abbr title="artificial intelligence">AI</abbr> sandbox. The Office for Nuclear Regulation (<abbr title="Office for Nuclear Regulation">ONR</abbr>) and the Environment Agency (<abbr title="Environment Agency">EA</abbr>) have taken the learnings from their own regulatory sandbox to develop the concept of an international <abbr title="artificial intelligence">AI</abbr> sandbox for the nuclear sector.</p> <h2 id="annex-d-summary-of-impact-assessment-evidence">Annex D: Summary of impact assessment evidence</h2> <p>This annex provides a summary of the written evidence we received in response to our consultation on the <abbr title="artificial intelligence">AI</abbr> regulation impact assessment<sup id="fnref:124" role="doc-noteref"><a href="#fn:124" class="govuk-link" rel="footnote">[footnote 124]</a></sup>. We asked eight questions, including seven open or semi-open questions that received a range of written reflections. We asked:</p> <ol> <li>Do you agree that the rationale for intervention comprehensively covers and evidences current and future harms?</li> <li>Do you agree that increased trust is a significant driver of demand for <abbr title="artificial intelligence">AI</abbr> systems?</li> <li>Do you have any additional evidence to support the following estimates and assumptions across the framework?</li> <li>Do you agree with the estimates associated with the central functions?</li> <li>Are you aware of any alternative metrics to measure the policy objectives?</li> <li>Do you believe that some <abbr title="artificial intelligence">AI</abbr> systems would be prohibited in Options 1 and 2, due to increased regulatory scrutiny?</li> <li>Do you agree with our assessment of each policy option against the objectives?</li> <li>Do you have any additional evidence that proves or disproves our analysis in the impact assessment?</li> </ol> <p>In total, we received 64 written responses to the impact assessment consultation from organisations and individuals. The method of our analysis is captured in Annex A and a summary of responses to these questions follows.</p> <p><strong>Question 1: Do you agree that the rationale for intervention comprehensively covers and evidences current and future harms?</strong></p> <p>Summary of responses:</p> <p>More than half of respondents disagreed that the rationale for intervention comprehensively covers and evidences current and future harms. Nearly half of respondents stated that not all risks are adequately addressed. Many of these respondents argued that the rationale does not account for unexpected harms or existential and systemic risks. 
One respondent argued that the rationale does not consider the impact of <abbr title="artificial intelligence">AI</abbr> on human rights. Another respondent suggested that there should be mandatory requirements for the ethical collection of data, and another advocated for pre-deployment measures to mitigate <abbr title="artificial intelligence">AI</abbr> risks.</p> <p>Over a quarter of respondents suggested analysing risks and opportunities for each sector. These respondents often argued that the potential harms and benefits in different industries are not accounted for, such as the impact of <abbr title="artificial intelligence">AI</abbr> on jobs.</p> <p>Some respondents advocated for the government to build the evidence on current and future harms as well as potential interventions. Many of these respondents emphasised the importance of including diverse perspectives and the public voice when conducting research and regulating <abbr title="artificial intelligence">AI</abbr>.</p> <p>A few respondents noted that the government and regulators should adopt a flexible approach that monitors and can adapt to technological developments.</p> <p>A few respondents stated that excessive regulation and government intervention will stifle innovation instead of encouraging it. These respondents argued that there needs to be a balance between mitigating risks and enabling the benefits of <abbr title="artificial intelligence">AI</abbr>.</p> <p>One respondent stated that there should be an independent regulator for <abbr title="artificial intelligence">AI</abbr>.</p> <p><strong>Question 2: Do you agree that increased trust is a significant driver of demand for <abbr title="artificial intelligence">AI</abbr> systems?</strong></p> <p>Summary of responses:</p> <p>Over half of respondents agreed that trust is a significant driver of demand for <abbr title="artificial intelligence">AI</abbr> systems. However, around a quarter disagreed, and some remained unsure.</p> <p>Over a third of respondents gave a written answer that provided further insight beyond simply agreeing or disagreeing. Of these, many respondents stressed that transparency, education, and governance measures (such as regulation and technical standards) increase trust. These ideas appeared among both respondents who agreed and those who disagreed that trust drives demand for <abbr title="artificial intelligence">AI</abbr>.</p> <p>Respondents also argued that trust in <abbr title="artificial intelligence">AI</abbr> could be reduced by concerns about bias or safety. Some of these respondents highlighted that unfair or opaque bias in <abbr title="artificial intelligence">AI</abbr> systems not only reduces trust but impacts already marginalised communities the most. Some respondents argued that prioritising innovation over trust in a regulatory approach would reduce trust.</p> <p>Among the respondents that disagreed that trust was a driver of <abbr title="artificial intelligence">AI</abbr> uptake and provided further written responses, two main themes emerged. First, that demand for <abbr title="artificial intelligence">AI</abbr> is driven by economic and financial incentives and, second, that it is driven by technological developments. For example, one respondent highlighted that <abbr title="artificial intelligence">AI</abbr> could increase productivity and thus the profitability of companies. 
Respondents also highlighted technological developments as a driver for <abbr title="artificial intelligence">AI</abbr> demand, with two respondents stating that companies’ “fear of missing out” on new technologies could drive their demand for <abbr title="artificial intelligence">AI</abbr> systems.</p> <p>Respondents that disagreed often suggested that increasing <abbr title="artificial intelligence">AI</abbr> demand and adoption comes at the cost of safeguarding the public and risk mitigation.</p> <p><strong>Question 3: Do you have any additional evidence to support the following estimates and assumptions across the framework?</strong></p> <p>Summary of responses:</p> <p>Respondents reacted to each statement differently, and levels of agreement varied across the statements. In written feedback, some respondents suggested that our estimates and assumptions depend on complex factors or that it is not possible to provide estimates about <abbr title="artificial intelligence">AI</abbr> due to too many uncertainties.</p> <p>For the first estimate, that the 431,671 businesses adopting or consuming <abbr title="artificial intelligence">AI</abbr> will be impacted less than the estimated 3,170 businesses supplying or producing <abbr title="artificial intelligence">AI</abbr>, disagreeing respondents argued that it understates the number of businesses likely to be affected by <abbr title="artificial intelligence">AI</abbr>; that the number can change rapidly because it is easy to integrate <abbr title="artificial intelligence">AI</abbr> into a product or service; that the division between <abbr title="artificial intelligence">AI</abbr> adopters and producers is somewhat artificial; and that consumers should also be considered.</p> <p>For the second statement, that those who adopt or consume <abbr title="artificial intelligence">AI</abbr> products and services will face lower costs than those who produce or supply them, there was some disagreement and one response in agreement. Those who disagreed questioned whether consumers of <abbr title="artificial intelligence">AI</abbr> will in practice face lower costs than producers, since consumers and users more widely can face (increasing) costs of using <abbr title="artificial intelligence">AI</abbr> applications. On the other hand, one respondent mentioned that cost savings will apply to users without a deep understanding of the technology, while producers will face high salary costs because of the small pool of labour talent able to operate advanced <abbr title="artificial intelligence">AI</abbr> systems.</p> <p>Concerning the third estimate, that familiarisation costs (here referring to the cost of businesses upskilling employees in new regulation) will land in the range of £2.7 million to £33.7 million, a couple of respondents that disagreed stated that familiarisation costs could vary from business to business. These respondents argued that the current range understated the full costs and recommended considering other costs. Some suggested that consumers need to be trained on residual risk and how to overcome automation bias. 
Others mentioned that the independent audit of <abbr title="artificial intelligence">AI</abbr> systems will create many new highly trained jobs.</p>
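<p>For illustration, a familiarisation-cost range of this kind is typically built up from the number of affected businesses, the time each spends reading new guidance, and an hourly labour cost. The sketch below shows that arithmetic with assumed inputs; it is not the methodology of the impact assessment itself.</p> <pre><code># Hypothetical illustration of how a familiarisation-cost range can be
# built up: affected businesses x hours spent reading new guidance x
# hourly labour cost. All inputs are assumptions for the example, not
# figures from the impact assessment.

def familiarisation_cost(businesses: int, hours_each: float,
                         hourly_cost: float) -> float:
    return businesses * hours_each * hourly_cost

# Low scenario: only the ~3,170 AI producers read the guidance closely.
low = familiarisation_cost(3_170, 4, 30.0)
# High scenario: all ~434,841 affected businesses spend some time on it.
high = familiarisation_cost(434_841, 2, 35.0)
print(f"Illustrative range: £{low:,.0f} to £{high:,.0f}")
# Small changes to the assumed hours or wage shift the total by millions,
# consistent with respondents' point that such costs vary greatly from
# business to business.
</code></pre>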
<p>Finally, on the fourth estimate, that compliance costs (here reflecting the cost of businesses adjusting business elements to comply with new standards) will land in the range of £107 million to £6.7 billion, there was further disagreement. Some respondents said that compliance costs should be as low as possible, but there was no agreement on how best to achieve this. Other respondents stated that companies will not comply and that compliance would necessitate new business activities.</p> <p><strong>Question 4: Do you agree with the estimates associated with the central functions?</strong></p> <p>Summary of responses:</p> <p>A slight majority of respondents somewhat disagreed with the estimates outlined in the <abbr title="artificial intelligence">AI</abbr> regulation impact assessment, suggesting that central function estimates are too high. Some respondents mentioned that the central function could deploy <abbr title="artificial intelligence">AI</abbr> and use automation to harness efficiency and drive down cost estimates. Two respondents also highlighted that the central function could employ techniques such as peer-to-peer learning and networks to drive down cost estimates.</p> <p>On the other hand, some respondents indicated that central function estimates are too low. Some respondents believed that the current estimates are too low because they do not account for costs associated with the late upskilling of central function employees. One respondent suggested that the increasing demand for <abbr title="artificial intelligence">AI</abbr> from the commercial sector would raise costs further and create challenges for the central function in accessing <abbr title="artificial intelligence">AI</abbr> solutions due to inflationary cost pressure. Some respondents suggested that the expanding scale and capabilities of <abbr title="artificial intelligence">AI</abbr> would require a larger central function to regulate the technology, arguing that current costs are likely to be conservative estimates.</p> <p>A few respondents did agree that the estimates are accurate. However, many noted that it would be a challenge to pin a specific number to the estimates associated with the central function, and suggested that a lack of clarity in defining terms made it difficult to assess the accuracy of the estimates.</p> <p><strong>Question 5: Are you aware of any alternative metrics to measure the policy objectives?</strong></p> <p>Summary of responses:</p> <p>More than a third of respondents suggested alternative metrics that could be used to measure the policy objectives. Some suggestions included tracking the number of models being audited for bias and fairness; the number of <abbr title="artificial intelligence">AI</abbr>-related incidents being reported and investigated; and metrics related to the framework’s operation, such as the number of regulators publishing guidance, the nature of guidance and associated outcomes for organisations that have adopted it, or sentiment indicators from stakeholders. Other suggestions included tracking public trust and acceptance of <abbr title="artificial intelligence">AI</abbr> systems.</p> <p>Almost a quarter of respondents suggested existing frameworks and models. A couple of respondents suggested that effective assessment and regulation of harm would be key to measuring the policy objectives.</p> <p><strong>Question 6: Do you believe that some <abbr title="artificial intelligence">AI</abbr> systems would be prohibited in options 1 and 2 due to increased regulatory scrutiny?</strong></p> <p>Summary of responses:</p> <p>Over half of respondents agreed that some <abbr title="artificial intelligence">AI</abbr> systems would be prohibited in options 1 and 2 due to increased regulatory scrutiny. Around a quarter of respondents disagreed and just under a third were unsure.</p> <p>Of respondents that expanded on their thoughts, a third suggested that some <abbr title="artificial intelligence">AI</abbr> systems present a threat to society and should be prohibited. These respondents emphasised that prohibition would reduce <abbr title="artificial intelligence">AI</abbr> risks and saw prohibition as a positive impact. Some suggested that a lack of any prohibition would represent a failure of the regulatory framework.</p> <p>Some stakeholders suggested that some <abbr title="artificial intelligence">AI</abbr> systems would be prohibited. However, a similar number suggested that the regulatory scrutiny under options 1 and 2 would not be sufficient to prohibit <abbr title="artificial intelligence">AI</abbr> systems. These two sets of responses reflected conflicting understandings of the intensity of the proposed regulations, rather than inherent views on how regulation might impact the sector. A few indicated that the impact assessment was unable to provide enough evidence around which <abbr title="artificial intelligence">AI</abbr> systems might be prohibited.</p> <p><strong>Question 7: Do you agree with our assessment of each policy option against the objectives?</strong></p> <p>Summary of responses:</p> <p>Just over a third of respondents either strongly or somewhat agreed with the assessment of each policy option against objectives, with most responding that they somewhat agreed. A similar number either strongly or somewhat disagreed, with most of these responding that they only somewhat disagreed. Around a quarter of respondents neither agreed nor disagreed, or indicated they were unsure.</p> <p><strong>Question 8: Do you have any additional evidence that proves or disproves our analysis in the impact assessment?</strong></p> <p>Summary of responses:</p> <p>Almost half of written responses suggested that the <abbr title="artificial intelligence">AI</abbr> regulation impact assessment insufficiently estimated the impacts of <abbr title="artificial intelligence">AI</abbr>. These respondents indicated that the impacts of <abbr title="artificial intelligence">AI</abbr> are much larger and more harmful than is implied by the <abbr title="artificial intelligence">AI</abbr> regulation impact assessment and white paper.</p> <p>Just under a third indicated that the government should act quickly to regulate emerging <abbr title="artificial intelligence">AI</abbr> technologies. These respondents emphasised that timely action should be a key focus for <abbr title="artificial intelligence">AI</abbr> regulation given the quickly advancing capabilities of the technology.</p> <p>Some respondents indicated that there was too great a degree of uncertainty to make accurate assessments. 
These respondents thought that any estimation would be inaccurate due to the nature of <abbr title="artificial intelligence">AI</abbr> and the many uncertainties around future developments.</p> <p>Some respondents suggested that regulators should harmonise their approach to <abbr title="artificial intelligence">AI</abbr>, emphasising that the use of these technologies across sectors requires coordinated and consistent regulation.</p> <div class="footnotes" role="doc-endnotes"> <ol> <li id="fn:1" role="doc-endnote"> <p><a rel="external" href="https://www.trade.gov/market-intelligence/united-kingdom-artificial-intelligence-market-2023" class="govuk-link">United Kingdom Artificial Intelligence Market</a>, US International Trade Administration, 2023. <a href="#fnref:1" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:2" role="doc-endnote"> <p><a href="https://www.gov.uk/government/publications/frontier-ai-capabilities-and-risks-discussion-paper" class="govuk-link">Frontier <abbr title="artificial intelligence">AI</abbr>: capabilities and risks</a>, Department for Science, Innovation and Technology, 2023. <a href="#fnref:2" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:3" role="doc-endnote"> <p><a href="https://www.gov.uk/government/news/new-advisory-service-to-help-businesses-launch-ai-and-digital-innovations" class="govuk-link">New advisory service to help businesses launch <abbr title="artificial intelligence">AI</abbr> and digital innovations</a>, Department for Science, Innovation and Technology, 2023. <a href="#fnref:3" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:4" role="doc-endnote"> <p>To support the government’s planning and policy development, and given the material uncertainties that exist, the Government Office for Science has prepared a foresight report outlining possible scenarios that may arise in the context of <abbr title="artificial intelligence">AI</abbr> development, proliferation and impact in 2030. See: <a href="https://www.gov.uk/government/publications/frontier-ai-capabilities-and-risks-discussion-paper/future-risks-of-frontier-ai-annex-a" class="govuk-link">Future risks of frontier <abbr title="artificial intelligence">AI</abbr> (Annex A),</a> Government Office for Science, 2023. A full report on the scenarios will be published shortly (this report will not be a statement of government policy). <a href="#fnref:4" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:5" role="doc-endnote"> <p><a href="https://www.gov.uk/government/speeches/prime-ministers-speech-on-ai-26-october-2023" class="govuk-link">Prime Minister’s speech on <abbr title="artificial intelligence">AI</abbr>: 26 October 2023</a>, Prime Minister’s Office, 10 Downing Street, 2023. <a href="#fnref:5" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:6" role="doc-endnote"> <p><a href="https://www.gov.uk/government/publications/international-science-partnerships-fund-ispf" class="govuk-link">International Science Partnerships Fund</a>, <abbr title="UK Research and Innovation">UKRI</abbr>, 2023. 
<a href="#fnref:6" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:7" role="doc-endnote"> <p><a rel="external" href="https://openai.com/blog/how-should-ai-systems-behave" class="govuk-link">How should <abbr title="artificial intelligence">AI</abbr> systems behave, and who should decide?</a>, OpenAI, 2023. <a href="#fnref:7" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:8" role="doc-endnote"> <p><a href="https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach" class="govuk-link">Safety and Security Risks of Generative Artificial Intelligence to 2025</a>, Government Office for Science, 2023. <a href="#fnref:8" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:9" role="doc-endnote"> <p>We provide further detail on this area as part of our description of the cross-sectoral safety, security and robustness principle in the <abbr title="artificial intelligence">AI</abbr> regulation white paper. See: <a href="https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach" class="govuk-link"><abbr title="artificial intelligence">AI</abbr> regulation: a pro-innovation approach,</a> Department for Science, Innovation and Technology, 2023. <a href="#fnref:9" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:10" role="doc-endnote"> <p><a href="https://www.gov.uk/government/publications/frontier-ai-capabilities-and-risks-discussion-paper" class="govuk-link">Frontier <abbr title="artificial intelligence">AI</abbr>: capabilities and risks</a>, Department for Science, Innovation and Technology, 2023. <a href="#fnref:10" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:11" role="doc-endnote"> <p>Large dedicated <abbr title="artificial intelligence">AI</abbr> companies make a major contribution to the UK economy, with <abbr title="gross value added">GVA</abbr> (gross value added) per employee estimated to be £400,000, more than double that of comparable estimates of large dedicated firms in other sectors. See: <a href="https://www.gov.uk/government/publications/artificial-intelligence-sector-study-2022" class="govuk-link"><abbr title="artificial intelligence">AI</abbr> Sector Study 2022</a>, Department for Science, Innovation and Technology, 2023. <a href="#fnref:11" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:12" role="doc-endnote"> <p><a rel="external" href="https://www.tortoisemedia.com/intelligence/global-ai/" class="govuk-link">The Global <abbr title="artificial intelligence">AI</abbr> Index</a> Tortoise Media, 2023. <a href="#fnref:12" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:13" role="doc-endnote"> <p><a href="https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach" class="govuk-link"><abbr title="artificial intelligence">AI</abbr> regulation: a pro-innovation approach</a>, Department for Science, Innovation and Technology, 2023. 
<a href="#fnref:13" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:14" role="doc-endnote"> <p><a href="https://www.gov.uk/government/consultations/ai-regulation-a-pro-innovation-approach-policy-proposals" class="govuk-link"><abbr title="artificial intelligence">AI</abbr> regulation: a pro-innovation approach – policy proposals</a>, Department for Science, Innovation and Technology, 2023. <a href="#fnref:14" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:15" role="doc-endnote"> <p><a href="https://www.gov.uk/government/publications/frontier-ai-capabilities-and-risks-discussion-paper" class="govuk-link">Frontier <abbr title="artificial intelligence">AI</abbr>: capabilities and risks</a>, Department for Science, Innovation, and Technology, 2023. <a href="#fnref:15" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:16" role="doc-endnote"> <p><a rel="external" href="https://www.reuters.com/technology/race-towards-autonomous-ai-agents-grips-silicon-valley-2023-07-17/" class="govuk-link">Race towards ‘autonomous’ <abbr title="artificial intelligence">AI</abbr> agents grips Silicon Valley</a>, Anna Tong and Jeffrey Dastin, 2023; <a rel="external" href="https://openai.com/blog/introducing-superalignment" class="govuk-link">Introducing superalignment</a>, Jan Leike and Ilya Sutskever (OpenAI), 2023; <a rel="external" href="https://deepmind.google/about/" class="govuk-link"><abbr title="artificial intelligence">AI</abbr> could be one of humanity’s most useful inventions</a>, Google Deepmind, n.d.. <a href="#fnref:16" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:17" role="doc-endnote"> <p><a rel="external" href="https://www.oecd.org/employment-outlook/2023/#ai-jobs" class="govuk-link">Employment Outlook 2023: artificial intelligence and jobs</a>, <abbr title="Organisation for Economic Co-operation and Development">OECD</abbr>, 2023. <a href="#fnref:17" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:18" role="doc-endnote"> <p><a rel="external" href="https://assets.kpmg.com/content/dam/kpmg/uk/pdf/2023/06/generative-ai-and-the-uk-labour-market.pdf" class="govuk-link">Generative <abbr title="artificial intelligence">AI</abbr> and the UK labour market,</a> KPMG, 2023; <a rel="external" href="https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/the-economic-potential-of-generative-ai-the-next-productivity-frontier#key-insights" class="govuk-link">The economic potential of generative <abbr title="artificial intelligence">AI</abbr>: the next productivity frontier</a>, McKinsey, 2023; <a rel="external" href="https://www.ifow.org/publications/adoption-of-ai-in-uk-firms-and-the-consequences-for-jobs" class="govuk-link">What drives UK firms to adopt <abbr title="artificial intelligence">AI</abbr> and robotics, and what are the consequences for jobs?,</a> Institute for the Future of Work, 2023. 
<a href="#fnref:18" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:19" role="doc-endnote"> <p><a rel="external" href="https://www.forbes.com/sites/cindygordon/2023/02/02/chatgpt-is-the-fastest-growing-ap-in-the-history-of-web-applications/" class="govuk-link">ChatGPT is the fastest growing app in the history of web applications</a>, Cindy Gordon, 2023. <a href="#fnref:19" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:20" role="doc-endnote"> <p><a rel="external" href="https://www.zsl.org/news-and-events/feature/using-ai-monitor-trackside-britains-wildlife" class="govuk-link">Using <abbr title="artificial intelligence">AI</abbr> to monitor trackside Britain’s wildlife</a>, Zoological Society London, 2023. <a href="#fnref:20" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:21" role="doc-endnote"> <p><a rel="external" href="https://www.nature.com/articles/s41586-023-06555-x" class="govuk-link">A foundation model for generalizable disease detection from retinal images</a>, Esma Aïmeur et al., 2023. <a href="#fnref:21" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:22" role="doc-endnote"> <p><a rel="external" href="https://dl.acm.org/doi/pdf/10.1145/3544548.3581318" class="govuk-link">Synthetic lies: understanding <abbr title="artificial intelligence">AI</abbr>-generated misinformation and evaluating algorithmic and human solutions</a>, Jiawei Zhou et al., 2023; <a rel="external" href="https://link.springer.com/article/10.1007/s13278-023-01028-5#Sec16" class="govuk-link">Fake news, disinformation and misinformation in social media: a review,</a> Yukon Zhou et al., 2023; <a rel="external" href="https://arxiv.org/pdf/2306.12807.pdf" class="govuk-link"><abbr title="artificial intelligence">AI</abbr> could create a perfect storm of climate misinformation</a>, Victor Galaz et al., 2023. <a href="#fnref:22" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:23" role="doc-endnote"> <p><a rel="external" href="https://www.nature.com/articles/s42256-022-00465-9" class="govuk-link">Dual use of artificial-intelligence-powered drug discovery</a>, Fabio Urbina et al., 2022. <a href="#fnref:23" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:24" role="doc-endnote"> <p><a href="https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach" class="govuk-link"><abbr title="artificial intelligence">AI</abbr> regulation: a pro-innovation approach</a>, Department for Science, Innovation and Technology, 2023. <a href="#fnref:24" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:25" role="doc-endnote"> <p><a href="https://www.gov.uk/cma-cases/ai-foundation-models-initial-review" class="govuk-link"><abbr title="artificial intelligence">AI</abbr> Foundation Models: initial review</a>, <abbr title="Competition and Markets Authority">CMA</abbr>, 2023. 
<a href="#fnref:25" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:26" role="doc-endnote"> <p><a rel="external" href="https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/how-do-we-ensure-fairness-in-ai/" class="govuk-link">How do we ensure fairness in <abbr title="artificial intelligence">AI</abbr>?</a>, <abbr title="Information Commissioner's Office">ICO</abbr>, 2023. <a href="#fnref:26" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:27" role="doc-endnote"> <p><a href="https://www.gov.uk/government/publications/software-and-ai-as-a-medical-device-change-programme/software-and-ai-as-a-medical-device-change-programme-roadmap" class="govuk-link">Software and <abbr title="artificial intelligence">AI</abbr> as a Medical Device Change Programme – Roadmap</a>, <abbr title="Medicines and Healthcare products Regulatory Agency">MHRA</abbr>, updated 2023 [2021]. <a href="#fnref:27" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:28" role="doc-endnote"> <p>The government has written to the Office of Communications (<abbr title="Office of Communications">Ofcom</abbr>); Information Commissioner’s Office (<abbr title="Information Commissioner's Office">ICO</abbr>); Financial Conduct Authority (<abbr title="Financial Conduct Authority">FCA</abbr>); Competition and Markets Authority (<abbr title="Competition and Markets Authority">CMA</abbr>); Equality and Human Rights Commission (<abbr title="Equality and Human Rights Commission">EHRC</abbr>); Medicines and Healthcare products Regulatory Agency (<abbr title="Medicines and Healthcare products Regulatory Agency">MHRA</abbr>); Office for Standards in Education, Children’s Services and Skills (<abbr title="Office for Standards in Education, Children’s Services and Skills">Ofsted</abbr>); Legal Services Board (<abbr title="Legal Services Board">LSB</abbr>); Office for Nuclear Regulation (<abbr title="Office for Nuclear Regulation">ONR</abbr>); Office of Qualifications and Examinations Regulation (<abbr title="Office of Qualifications and Examinations Regulation">Ofqual</abbr>); Health and Safety Executive (<abbr title="Health and Safety Executive">HSE</abbr>); Bank of England; and Office of Gas and Electricity Markets (<abbr title="Office of Gas and Electricity Markets">Ofgem</abbr>). The Office for Product Safety and Standards (<abbr title="Office for Product Safety and Standards">OPSS</abbr>), which sits within the Department for Business and Trade, has also been asked to produce an update. Regulators will be best placed to determine the form and substance of their update and we encourage all regulators that consider <abbr title="artificial intelligence">AI</abbr> to be relevant to their work to publish their approaches. As we continue to implement the framework and assess regulator readiness, our prioritisation of regulators may change to reflect evolving factors such as our risk analysis. We will also work with other regulators and encourage the publication of action plans to drive transparency across the wider ecosystem. 
<a href="#fnref:28" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:29" role="doc-endnote"> <p><a href="https://www.gov.uk/government/publications/pro-innovation-regulation-of-technologies-review-cross-cutting" class="govuk-link">Response to Professor Dame Angela McLean’s Pro-Innovation Regulation of Technologies Review: Cross Cutting</a>, HM Treasury, 2023. <a href="#fnref:29" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:30" role="doc-endnote"> <p><a rel="external" href="https://www.ukri.org/news/250m-to-secure-the-uks-world-leading-position-in-technologies-of-tomorrow/" class="govuk-link">£250 million to secure the UK’s world-leading position in technologies of tomorrow</a>, <abbr title="UK Research and Innovation">UKRI</abbr>, 2023. <a href="#fnref:30" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:31" role="doc-endnote"> <p>Members of the <abbr title="Digital Regulation Cooperation Forum">DRCF</abbr> include the <abbr title="Competition and Markets Authority">CMA</abbr>, <abbr title="Information Commissioner's Office">ICO</abbr>, <abbr title="Financial Conduct Authority">FCA</abbr>, and <abbr title="Office of Communications">Ofcom</abbr>. See: <a href="https://www.gov.uk/government/news/new-advisory-service-to-help-businesses-launch-ai-and-digital-innovations" class="govuk-link">New advisory service to help businesses launch <abbr title="artificial intelligence">AI</abbr> and digital innovations</a>, Department for Science, Innovation and Technology, 2023. <a href="#fnref:31" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:32" role="doc-endnote"> <p><a rel="external" href="https://aistandardshub.org/" class="govuk-link">The <abbr title="artificial intelligence">AI</abbr> Standards Hub</a>, <abbr title="artificial intelligence">AI</abbr> Standards Hub, 2022. <a href="#fnref:32" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:33" role="doc-endnote"> <p><a href="https://www.gov.uk/guidance/cdei-portfolio-of-ai-assurance-techniques" class="govuk-link">Portfolio of <abbr title="artificial intelligence">AI</abbr> Assurance Techniques</a>, Department for Science, Innovation and Technology, 2023. <a href="#fnref:33" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:34" role="doc-endnote"> <p><a href="https://www.gov.uk/government/publications/public-attitudes-to-data-and-ai-tracker-survey-wave-3" class="govuk-link">Public attitudes to data and <abbr title="artificial intelligence">AI</abbr>: Tracker survey (Wave 3)</a>, Department for Science, Innovation and Technology, 2023. <a href="#fnref:34" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:35" role="doc-endnote"> <p>We have previously categorised these as societal harms; misuse risks; and loss of control. See: <a href="https://www.gov.uk/government/publications/frontier-ai-capabilities-and-risks-discussion-paper" class="govuk-link">Frontier <abbr title="artificial intelligence">AI</abbr>: capabilities and risks</a>, Department for Science, Innovation and Technology, 2023. 
<a href="#fnref:35" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:36" role="doc-endnote"> <p><a rel="external" href="https://braiduk.org/" class="govuk-link">Bridging Responsible <abbr title="artificial intelligence">AI</abbr> Divides</a>, <abbr title="Bridging Responsible AI Divides">BRAID</abbr> UK, 2024. <a href="#fnref:36" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:37" role="doc-endnote"> <p><a rel="external" href="https://iuk.ktn-uk.org/news/ai-skills-for-business-guidance-feedback-consultation-call-from-the-alan-turing-institute/" class="govuk-link"><abbr title="artificial intelligence">AI</abbr> Skills for Business Guidance: Feedback Consultation Call from The Alan Turing Institute</a>, Innovate UK, 2023. <a href="#fnref:37" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:38" role="doc-endnote"> <p>A recent study by the Institute for the Future of Work shows that the net impact on skills and job creation for UK firms that have adopted <abbr title="artificial intelligence">AI</abbr> and robotics technologies is positive. However, these positive impacts on jobs and job quality are associated with the levels of readiness within a firm. See: <a rel="external" href="https://www.ifow.org/publications/adoption-of-ai-in-uk-firms-and-the-consequences-for-jobs" class="govuk-link">What drives UK firms to adopt <abbr title="artificial intelligence">AI</abbr> and robotics, and what are the consequences for jobs?</a>, Institute for the Future of Work, 2023. <a href="#fnref:38" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:39" role="doc-endnote"> <p><a href="https://www.gov.uk/government/publications/the-impact-of-ai-on-uk-jobs-and-training" class="govuk-link">The impact of <abbr title="artificial intelligence">AI</abbr> on UK jobs and training</a>, Department for Education, 2023. <a href="#fnref:39" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:40" role="doc-endnote"> <p>Apprenticeships are for people aged 16 and over who are not in full time education. See: <a href="https://www.gov.uk/apply-apprenticeship" class="govuk-link">Find an apprenticeship,</a> Department for Education, n.d.. <a href="#fnref:40" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:41" role="doc-endnote"> <p>Skills Bootcamps are for adults aged 19 and over. See: <a href="https://www.gov.uk/guidance/find-a-skills-bootcamp" class="govuk-link">Find a skills bootcamp</a>, Department for Education, 2024 [2022]. <a href="#fnref:41" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:42" role="doc-endnote"> <p><a href="https://www.gov.uk/government/publications/lifelong-learning-entitlement-lle-overview/lifelong-learning-entitlement-overview" class="govuk-link">Lifelong Learning Entitlement overview</a>, Department for Education, 2024. 
<a href="#fnref:42" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:43" role="doc-endnote"> <p><a href="https://www.gov.uk/government/publications/a-world-class-education-system-the-advanced-british-standard" class="govuk-link">A world-class education system: The Advanced British Standard</a>, Department for Education, 2023. <a href="#fnref:43" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:44" role="doc-endnote"> <p><a href="https://www.gov.uk/government/publications/creative-industries-sector-vision" class="govuk-link">Creative Industries Sector Vision</a>, Department for Culture, Media and Sport, 2023. <a href="#fnref:44" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:45" role="doc-endnote"> <p><a href="https://www.gov.uk/government/publications/frontier-ai-capabilities-and-risks-discussion-paper" class="govuk-link">Frontier <abbr title="artificial intelligence">AI</abbr>: capabilities and risks</a>, Department for Science, Innovation and Technology, 2023. <a href="#fnref:45" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:46" role="doc-endnote"> <p><a rel="external" href="https://link.springer.com/article/10.1007/s00146-023-01676-3" class="govuk-link">Algorithmic discrimination in the credit domain: what do we know about it?</a>, Ana Cristina Bicharra Garcia et al., 2023. <a href="#fnref:46" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:47" role="doc-endnote"> <p><a rel="external" href="https://www.nature.com/articles/s41599-023-02079-x" class="govuk-link">Ethics and discrimination in artificial intelligence-enabled recruitment practices,</a> Zhisheng Chen, 2023. <a href="#fnref:47" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:48" role="doc-endnote"> <p><a rel="external" href="https://fairnessinnovationchallenge.co.uk/" class="govuk-link">Fairness Innovation Challenge</a>, Department for Science, Innovation and Technology; Innovate UK, 2023. <a href="#fnref:48" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:49" role="doc-endnote"> <p><a rel="external" href="https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/" class="govuk-link">Guidance on <abbr title="artificial intelligence">AI</abbr> and data protection</a>, <abbr title="Information Commissioner's Office">ICO</abbr>, 2023. <a href="#fnref:49" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:50" role="doc-endnote"> <p><a rel="external" href="https://science.police.uk/delivery/resources/covenant-for-using-artificial-intelligence-ai-in-policing/" class="govuk-link">Covenant for Using Artificial Intelligence (<abbr title="artificial intelligence">AI</abbr>) in Policing</a>, National Police Chiefs’ Council, n.d.. 
<a href="#fnref:50" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:51" role="doc-endnote"> <p><a href="https://www.gov.uk/government/publications/frontier-ai-capabilities-and-risks-discussion-paper" class="govuk-link">Frontier <abbr title="artificial intelligence">AI</abbr>: capabilities and risks,</a> Department for Science, Innovation and Technology, 2023. <a href="#fnref:51" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:52" role="doc-endnote"> <p><a rel="external" href="https://misinforeview.hks.harvard.edu/article/misinformation-in-action-fake-news-exposure-is-linked-to-lower-trust-in-media-higher-trust-in-government-when-your-side-is-in-power/" class="govuk-link">Misinformation in action: Fake news exposure is linked to lower trust in media, higher trust in government when your side is in power</a>, Katherine Ognyanova et al., 2020. <a href="#fnref:52" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:53" role="doc-endnote"> <p><a href="https://www.gov.uk/government/publications/emerging-processes-for-frontier-ai-safety/emerging-processes-for-frontier-ai-safety" class="govuk-link">Emerging Processes for Frontier <abbr title="artificial intelligence">AI</abbr> Safety,</a> Department for Science, Innovation and Technology, 2023. <a href="#fnref:53" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:54" role="doc-endnote"> <p><a href="https://www.gov.uk/cma-cases/ai-foundation-models-initial-review" class="govuk-link"><abbr title="artificial intelligence">AI</abbr> Foundation Models: initial review,</a> <abbr title="Competition and Markets Authority">CMA</abbr>, 2023. <a href="#fnref:54" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:55" role="doc-endnote"> <p><a rel="external" href="https://oxfordinsights.com/wp-content/uploads/2023/12/2023-Government-AI-Readiness-Index-2.pdf" class="govuk-link">Government <abbr title="artificial intelligence">AI</abbr> Readiness Index 202</a>3, Oxford Insights, 2023 <a href="#fnref:55" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:56" role="doc-endnote"> <p><a href="https://www.gov.uk/government/news/chancellor-to-cut-admin-workloads-to-free-up-frontline-staff" class="govuk-link">Chancellor to cut admin workloads to free up frontline staff,</a> HM Treasury; Home Office, 2023. <a href="#fnref:56" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:57" role="doc-endnote"> <p><a href="https://www.gov.uk/government/news/21-million-to-roll-out-artificial-intelligence-across-the-nhs" class="govuk-link">£21 million to roll out artificial intelligence across the <abbr title="National Health Service">NHS</abbr></a>, Department of Health and Social Care, 2023. <a href="#fnref:57" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:58" role="doc-endnote"> <p><a href="https://www.gov.uk/government/calls-for-evidence/generative-artificial-intelligence-in-education-call-for-evidence" class="govuk-link">Generative artificial intelligence in education call for evidence</a>, Department for Education, 2023. 
<a href="#fnref:58" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:59" role="doc-endnote"> <p><a href="https://www.gov.uk/government/publications/generative-ai-framework-for-hmg/generative-ai-framework-for-hmg-html" class="govuk-link">Generative <abbr title="artificial intelligence">AI</abbr> Framework for HMG</a>, Cabinet Office and Central Digital and Data Office, 2024. <a href="#fnref:59" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:60" role="doc-endnote"> <p><a rel="external" href="https://www.ncsc.gov.uk/section/advice-guidance/all-topics?topics=Artificial%20intelligence&amp;sort=date%2Bdesc" class="govuk-link">Artificial Intelligence,</a> National Cyber Security Centre, n.d.. <a href="#fnref:60" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:61" role="doc-endnote"> <p><a href="https://www.gov.uk/government/publications/frontier-ai-capabilities-and-risks-discussion-paper" class="govuk-link">Frontier <abbr title="artificial intelligence">AI</abbr>: capabilities and risks</a>, Department for Science, Innovation and Technology, 2023. <a href="#fnref:61" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:62" role="doc-endnote"> <p><a rel="external" href="https://www.nature.com/articles/s42256-022-00465-9" class="govuk-link">Dual use of artificial-intelligence-powered drug discovery,</a> Fabio Urbina et al., 2022. <a href="#fnref:62" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:63" role="doc-endnote"> <p><a href="https://www.gov.uk/government/publications/frontier-ai-capabilities-and-risks-discussion-paper" class="govuk-link">Frontier <abbr title="artificial intelligence">AI</abbr>: capabilities and risks</a>, Department for Science, Innovation and Technology, 2023. <a href="#fnref:63" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:64" role="doc-endnote"> <p><a href="https://www.gov.uk/government/publications/uk-biological-security-strategy" class="govuk-link">UK Biological Security Strategy,</a> Cabinet Office, 2023 <a href="#fnref:64" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:65" role="doc-endnote"> <p><a href="https://www.gov.uk/government/publications/national-vision-for-engineering-biology" class="govuk-link">National vision for engineering biology,</a> Department for Science, Innovation and Technology, 2023. <a href="#fnref:65" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:66" role="doc-endnote"> <p><a href="https://www.gov.uk/government/publications/frontier-ai-capabilities-and-risks-discussion-paper" class="govuk-link">Frontier <abbr title="artificial intelligence">AI</abbr>: capabilities and risks</a>, Department for Science, Innovation and Technology, 2023. <a href="#fnref:66" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:67" role="doc-endnote"> <p><a rel="external" href="https://cdn.openai.com/papers/practices-for-governing-agentic-ai-systems.pdf" class="govuk-link">Practices for Governing Agentic <abbr title="artificial intelligence">AI</abbr> Systems</a>, Yonadav Shavit et al., 2023. 
<a href="#fnref:67" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:68" role="doc-endnote"> <p><a href="https://www.gov.uk/government/publications/frontier-ai-capabilities-and-risks-discussion-paper" class="govuk-link">Frontier <abbr title="artificial intelligence">AI</abbr>: capabilities and risks,</a> Department for Science, Innovation and Technology, 2023. <a href="#fnref:68" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:69" role="doc-endnote"> <p><a rel="external" href="https://www.reuters.com/technology/race-towards-autonomous-ai-agents-grips-silicon-valley-2023-07-17/" class="govuk-link">Race towards ‘autonomous’ <abbr title="artificial intelligence">AI</abbr> agents grips Silicon Valley</a>, Anna Tong and Jeffrey Dastin, 2023; <a rel="external" href="https://openai.com/blog/introducing-superalignment" class="govuk-link">Introducing superalignment</a>, Jan Leike and Ilya Sutskever (OpenAI), 2023; <a rel="external" href="https://deepmind.google/about/" class="govuk-link"><abbr title="artificial intelligence">AI</abbr> could be one of humanity’s most useful inventions</a>, Google Deepmind, n.d.. <a href="#fnref:69" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:70" role="doc-endnote"> <p><a rel="external" href="https://assets.publishing.service.gov.uk/media/653bc393d10f3500139a6ac5/future-risks-of-frontier-ai-annex-a.pdf" class="govuk-link">Future Risks of Frontier <abbr title="artificial intelligence">AI</abbr>,</a> Government Office for Science, 2023. <a href="#fnref:70" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:71" role="doc-endnote"> <p><a href="https://www.gov.uk/government/speeches/prime-ministers-speech-on-ai-26-october-2023" class="govuk-link">Prime Minister’s speech on <abbr title="artificial intelligence">AI</abbr>: 26 October 2023</a>, Prime Minister’s Office, 10 Downing Street, 2023. <a href="#fnref:71" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:72" role="doc-endnote"> <p><a rel="external" href="https://futureoflife.org/open-letter/pause-giant-ai-experiments/" class="govuk-link">Pause Giant <abbr title="artificial intelligence">AI</abbr> Experiments: An Open Letter</a>, Future of Life Institute, 2023. <a href="#fnref:72" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:73" role="doc-endnote"> <p><a href="https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach" class="govuk-link"><abbr title="artificial intelligence">AI</abbr> regulation: a pro-innovation approach</a>, Department for Science, Innovation and Technology, 2023. <a href="#fnref:73" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:74" role="doc-endnote"> <p>We note, for instance, the enforcement action of the <abbr title="Information Commissioner's Office">ICO</abbr> who have used data protection law to hold organisations using <abbr title="artificial intelligence">AI</abbr> systems that process personal data to account for breaches of data protection law. 
The <abbr title="Competition and Markets Authority">CMA</abbr>’s initial review of foundation models notes that accountability for obligations under competition and consumer law applies across the <abbr title="artificial intelligence">AI</abbr> life cycle to both developers and deployers. See: <a href="https://www.gov.uk/cma-cases/ai-foundation-models-initial-review" class="govuk-link"><abbr title="artificial intelligence">AI</abbr> Foundation Models: initial review,</a> <abbr title="Competition and Markets Authority">CMA</abbr>, 2023. Similarly, the Medicines and Medical Devices Act 2021 gives the <abbr title="Medicines and Healthcare products Regulatory Agency">MHRA</abbr> enforcement powers sufficient to hold manufacturers of medical devices accountable, including the power to require that unsafe devices are removed from the market. In addition, enforcement of serious non-compliance can, where appropriate, result in criminal prosecution through the courts. <a href="#fnref:74" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:75" role="doc-endnote"> <p>The same model may be deployed directly by the developer and also integrated into an almost limitless variety of systems, products and tools that will fall under the remit of multiple regulators. <a href="#fnref:75" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:76" role="doc-endnote"> <p>The law may allocate liability to “Quantum Talent Technologies” in this scenario if the actor has established an “agency” relationship according to equality law or was privately contractually obligated to abide by equality law. The law may also attribute liability along the supply chain in negligence if there is a duty of care that has been breached causing foreseeable damage. However, some laws only apply to actors based in the UK. In this scenario, data protection law would apply, allowing the <abbr title="Information Commissioner's Office">ICO</abbr> to take enforcement action for any failure by a relevant data controller (such as “Count Your Pennies Ltd”) to process personal data fairly and lawfully. <a href="#fnref:76" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:77" role="doc-endnote"> <p><a href="https://www.gov.uk/guidance/equality-act-2010-guidance" class="govuk-link">Equality Act 2010: guidance</a>, Government Equalities Office and Equality and Human Rights Commission, 2015 [2013]. <a href="#fnref:77" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:78" role="doc-endnote"> <p><a rel="external" href="https://www.aisafetysummit.gov.uk/policy-updates/#company-policies" class="govuk-link">Company Policies</a>, <abbr title="artificial intelligence">AI</abbr> Safety Summit, 2023. <a href="#fnref:78" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:79" role="doc-endnote"> <p><a href="https://www.gov.uk/government/publications/emerging-processes-for-frontier-ai-safety/emerging-processes-for-frontier-ai-safety#data-input-controls-and-audits" class="govuk-link">Emerging Processes for Frontier <abbr title="artificial intelligence">AI</abbr> Safety,</a> Department for Science, Innovation and Technology, 2023. 
<a href="#fnref:79" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:80" role="doc-endnote"> <p>Responsible capability scaling is an emerging framework to manage risks associated with highly capable <abbr title="artificial intelligence">AI</abbr> and guide decision-making about <abbr title="artificial intelligence">AI</abbr> development and deployment. See: <a href="https://www.gov.uk/government/publications/emerging-processes-for-frontier-ai-safety/emerging-processes-for-frontier-ai-safety#responsible-capability-scaling" class="govuk-link">Responsible Capability Scaling</a> in <a href="https://www.gov.uk/government/publications/emerging-processes-for-frontier-ai-safety/emerging-processes-for-frontier-ai-safety" class="govuk-link">Emerging Processes for Frontier <abbr title="artificial intelligence">AI</abbr> Safety</a>. Department for Science, Innovation and Technology, 2023. <a href="#fnref:80" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:81" role="doc-endnote"> <p><a href="https://www.gov.uk/government/news/international-expertise-to-drive-international-ai-safety-report" class="govuk-link">International expertise to drive International <abbr title="artificial intelligence">AI</abbr> Safety Report</a>, Department for Science, Innovation and Technology, 2024. <a href="#fnref:81" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:82" role="doc-endnote"> <p>To support the government’s planning and policy development, and given the material uncertainties that exist, the Government Office for Science has prepared a foresight report outlining possible scenarios that may arise in the context of <abbr title="artificial intelligence">AI</abbr> development, proliferation and impact in 2030. See: <a href="https://www.gov.uk/government/publications/frontier-ai-capabilities-and-risks-discussion-paper/future-risks-of-frontier-ai-annex-a" class="govuk-link">Future risks of frontier <abbr title="artificial intelligence">AI</abbr> (Annex A),</a> Government Office for Science, 2023. A full report on the scenarios will be published shortly (this report will not be a statement of government policy). <a href="#fnref:82" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:83" role="doc-endnote"> <p><a href="https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach" class="govuk-link"><abbr title="artificial intelligence">AI</abbr> regulation: a pro-innovation approach</a>, Department for Science, Innovation and Technology, 2023. <a href="#fnref:83" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:84" role="doc-endnote"> <p><a href="https://www.gov.uk/government/publications/uk-international-technology-strategy" class="govuk-link">UK International Technology Strategy,</a> Foreign, Commonwealth &amp; Development Office, 2023. <a href="#fnref:84" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:85" role="doc-endnote"> <p><a href="https://www.gov.uk/government/speeches/prime-ministers-speech-on-ai-26-october-2023" class="govuk-link">Prime Minister’s speech on <abbr title="artificial intelligence">AI</abbr>: 26 October 2023</a>, Prime Minister’s Office, 10 Downing Street, 2023. 
<a href="#fnref:85" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:86" role="doc-endnote"> <p><a href="https://www.gov.uk/government/publications/ai-safety-summit-2023-the-bletchley-declaration/the-bletchley-declaration-by-countries-attending-the-ai-safety-summit-1-2-november-2023" class="govuk-link">The Bletchley Declaration by Countries Attending the <abbr title="artificial intelligence">AI</abbr> Safety Summit, 1-2 November 2023</a>, Department for Science, Innovation, and Technology; Foreign, Commonwealth and Development Office; Prime Minister’s Office, 10 Downing Street, 2023. <a href="#fnref:86" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:87" role="doc-endnote"> <p><a href="https://www.gov.uk/government/news/world-leaders-top-ai-companies-set-out-plan-for-safety-testing-of-frontier-as-first-global-ai-safety-summit-concludes" class="govuk-link">World leaders, top <abbr title="artificial intelligence">AI</abbr> companies set out plan for safety testing of frontier as first global <abbr title="artificial intelligence">AI</abbr> Safety Summit concludes</a>, Prime Minister’s Office, 10 Downing Street; Department for Science, Innovation and Technology, 2023. <a href="#fnref:87" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:88" role="doc-endnote"> <p><a href="https://www.gov.uk/government/news/international-expertise-to-drive-international-ai-safety-report" class="govuk-link">International expertise to drive International <abbr title="artificial intelligence">AI</abbr> Safety Report</a>, Department for Science, Innovation and Technology, 2024. <a href="#fnref:88" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:89" role="doc-endnote"> <p><a rel="external" href="https://www.mofa.go.jp/ecm/ec/page5e_000076.html" class="govuk-link"><abbr title="Group of Seven">G7</abbr> Leaders’ Statement on the Hiroshima <abbr title="artificial intelligence">AI</abbr> Process</a>, Ministry of Foreign Affairs Government of Japan, 2023. <a href="#fnref:89" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:90" role="doc-endnote"> <p><a rel="external" href="https://www.mea.gov.in/bilateral-documents.htm?dtl/37084/G20_New_Delhi_Leaders_Declaration" class="govuk-link"><abbr title="Group of 20">G20</abbr> New Delhi Leaders’ Declaration</a>, Ministry of External Affairs Government of India, 2023 <a href="#fnref:90" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:91" role="doc-endnote"> <p><a rel="external" href="https://sdgs.un.org/goals" class="govuk-link">The 17 goals</a>, United Nations, 2023. <a href="#fnref:91" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:92" role="doc-endnote"> <p><a rel="external" href="https://gpai.ai/2023-GPAI-Ministerial-Declaration.pdf" class="govuk-link"><abbr title="Global Partnership on AI">GPAI</abbr> New Delhi Ministerial Declaration</a>, Global Partnership on <abbr title="artificial intelligence">AI</abbr>, 2023. 
<a href="#fnref:92" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:93" role="doc-endnote"> <p><a rel="external" href="https://oecd.ai/en/ai-principles" class="govuk-link"><abbr title="Organisation for Economic Co-operation and Development">OECD</abbr> <abbr title="artificial intelligence">AI</abbr> Principles overview</a>, <abbr title="Organisation for Economic Co-operation and Development">OECD</abbr>, 2024. <a href="#fnref:93" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:94" role="doc-endnote"> <p><a href="https://www.gov.uk/guidance/cdei-portfolio-of-ai-assurance-techniques" class="govuk-link"><abbr title="Centre for Data Ethics and Innovation">CDEI</abbr> portfolio of <abbr title="artificial intelligence">AI</abbr> assurance techniques,</a> Centre for Data Ethics and Innovation; Department for Science, Innovation and Technology, 2023. <a href="#fnref:94" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:95" role="doc-endnote"> <p><a rel="external" href="https://oecd.ai/en/" class="govuk-link">Catalogue of tools and metrics for trustworthy <abbr title="artificial intelligence">AI</abbr></a>, <abbr title="Organisation for Economic Co-operation and Development">OECD</abbr>, n.d.. <a href="#fnref:95" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:96" role="doc-endnote"> <p><a rel="external" href="https://www.unesco.org/en/articles/recommendation-ethics-artificial-intelligence" class="govuk-link">Recommendation on the Ethics of Artificial Intelligence</a>, <abbr title="United Nations Educational, Scientific and Cultural Organization">UNESCO</abbr>, 2023. <a href="#fnref:96" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:97" role="doc-endnote"> <p><a rel="external" href="https://www.un.org/sites/un2.un.org/files/ai_advisory_body_interim_report.pdf" class="govuk-link">Governing <abbr title="artificial intelligence">AI</abbr> for Humanity,</a> United Nations, 2023. <a href="#fnref:97" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:98" role="doc-endnote"> <p><a rel="external" href="https://aistandardshub.org/" class="govuk-link">The <abbr title="artificial intelligence">AI</abbr> Standards Hub</a>, <abbr title="artificial intelligence">AI</abbr> Standards Hub, 2022. <a href="#fnref:98" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:99" role="doc-endnote"> <p><a href="https://www.gov.uk/government/news/uk-unites-with-global-partners-to-accelerate-development-using-ai" class="govuk-link">UK unites with global partners to accelerate development using <abbr title="artificial intelligence">AI</abbr>,</a> Foreign, Commonwealth &amp; Development Office, 2023. <a href="#fnref:99" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:100" role="doc-endnote"> <p><a href="https://www.gov.uk/government/publications/international-science-partnerships-fund-ispf" class="govuk-link">International Science Partnerships Fund</a>, <abbr title="UK Research and Innovation">UKRI</abbr>, 2023. 
<a href="#fnref:100" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:101" role="doc-endnote"> <p><a href="https://www.gov.uk/government/publications/the-atlantic-declaration#:~:text=On%208%20June%202023%2C%20the,the%20challenges%20of%20this%20moment." class="govuk-link">The Atlantic Declaration</a>, Prime Minister’s Office, 10 Downing Street, Foreign, Commonwealth &amp; Development Office, Department for Business and Trade, 2023.US <a href="#fnref:101" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:102" role="doc-endnote"> <p><a href="https://www.gov.uk/government/publications/the-hiroshima-accord" class="govuk-link">The Hiroshima Accord: An enhanced UK-Japan global strategic partnership</a>, Prime Minister’s Office, 10 Downing Street, 2023. <a href="#fnref:102" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:103" role="doc-endnote"> <p><a href="https://www.gov.uk/government/publications/the-downing-street-accord-a-united-kingdom-republic-of-korea-global-strategic-partnership" class="govuk-link">The Downing Street Accord: A United Kingdom-Republic of Korea Global Strategic Partnership</a>, Prime Minister’s Office, 10 Downing Street, 2023. <a href="#fnref:103" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:104" role="doc-endnote"> <p><a href="https://www.gov.uk/government/publications/uk-singapore-joint-declaration-9-september-2023/joint-declaration-by-the-prime-ministers-of-the-republic-of-singapore-and-the-united-kingdom-of-great-britain-and-northern-ireland-on-a-strategic-part#:~:text=This%20Joint%20Declaration%20on%20the,and%20prosperity%20of%20our%20countries." class="govuk-link">Joint Declaration by the Prime Ministers of the Republic of Singapore and the United Kingdom of Great Britain and Northern Ireland on a Strategic Partnership</a>, Prime Minister’s Office, 10 Downing Street, 2023. <a href="#fnref:104" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:105" role="doc-endnote"> <p><a rel="external" href="https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/" class="govuk-link">Guidance on <abbr title="artificial intelligence">AI</abbr> and data protection</a>, <abbr title="Information Commissioner's Office">ICO</abbr>, 2023. <a href="#fnref:105" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:106" role="doc-endnote"> <p>Developed by the Department for Science, Innovation and Technology (<abbr title="Department for Science, Innovation and Technology">DSIT</abbr>) and Central Digital and Data Office (<abbr title="Central Digital and Data Office">CDDO</abbr>) for the public sector. 
<a href="#fnref:106" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:107" role="doc-endnote"> <p><a href="https://www.gov.uk/government/publications/emerging-processes-for-frontier-ai-safety/emerging-processes-for-frontier-ai-safety" class="govuk-link">Emerging Processes for Frontier <abbr title="artificial intelligence">AI</abbr> Safety</a><a href="https://www.gov.uk/government/publications/emerging-processes-for-frontier-ai-safety/emerging-processes-for-frontier-ai-safety#data-input-controls-and-audits" class="govuk-link">,</a> Department for Science, Innovation and Technology, 2023.databases. <a href="#fnref:107" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:108" role="doc-endnote"> <p><a href="https://www.gov.uk/cma-cases/ai-foundation-models-initial-review" class="govuk-link"><abbr title="artificial intelligence">AI</abbr> Foundation Models: initial review,</a> <abbr title="Competition and Markets Authority">CMA</abbr>, 2023; <a rel="external" href="https://www.asa.org.uk/news/generative-ai-advertising-decoding-ai-regulation.html" class="govuk-link">Generative <abbr title="artificial intelligence">AI</abbr> &amp; Advertising: Decoding <abbr title="artificial intelligence">AI</abbr> Regulation</a>, ASA, 2023; <a rel="external" href="https://www.ofcom.org.uk/news-centre/2023/what-generative-ai-means-for-communications-sector" class="govuk-link">What generative <abbr title="artificial intelligence">AI</abbr> means for the communications sector</a>, <abbr title="Office of Communications">Ofcom</abbr>, 2023. <a href="#fnref:108" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:109" role="doc-endnote"> <p><a rel="external" href="https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/how-do-we-ensure-fairness-in-ai/" class="govuk-link">How do we ensure fairness in <abbr title="artificial intelligence">AI</abbr>?</a>, <abbr title="Information Commissioner's Office">ICO</abbr>, 2023; <a href="https://www.gov.uk/government/publications/software-and-artificial-intelligence-ai-as-a-medical-device/software-and-artificial-intelligence-ai-as-a-medical-device" class="govuk-link">Software and Artificial Intelligence (<abbr title="artificial intelligence">AI</abbr>) as a Medical Device,</a> <abbr title="Medicines and Healthcare products Regulatory Agency">MHRA</abbr>, updated 2023 [2021]. <a href="#fnref:109" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:110" role="doc-endnote"> <p><a rel="external" href="https://assets.publishing.service.gov.uk/media/655cd137544aea0019fb31e4/_8243__Government_Response_Draft_HMG_response_to_McLean_Cross-Cutting_Base_-_November_2023_PDF.pdf" class="govuk-link">Response to Professor Dame Angela McLean’s Pro-Innovation Regulation of Technologies Review: Cross Cutting</a>, HM Treasury, 2023. <a href="#fnref:110" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:111" role="doc-endnote"> <p><a href="https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach" class="govuk-link"><abbr title="artificial intelligence">AI</abbr> regulation: a pro-innovation approach</a>, Department for Science, Innovation and Technology, 2023. 
<a href="#fnref:111" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:112" role="doc-endnote"> <p><a href="https://www.gov.uk/guidance/cdei-portfolio-of-ai-assurance-techniques" class="govuk-link"><abbr title="Centre for Data Ethics and Innovation">CDEI</abbr> portfolio of <abbr title="artificial intelligence">AI</abbr> assurance techniques,</a> Centre for Data Ethics and Innovation; Department for Science, Innovation and Technology, 2023. <a href="#fnref:112" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:113" role="doc-endnote"> <p><a rel="external" href="https://fairnessinnovationchallenge.co.uk/" class="govuk-link">Fairness Innovation Challenge</a>, Department for Science, Innovation and Technology; InnovateUK, 2023. <a href="#fnref:113" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:114" role="doc-endnote"> <p><a href="https://www.gov.uk/government/publications/emerging-processes-for-frontier-ai-safety/emerging-processes-for-frontier-ai-safety" class="govuk-link">Emerging Processes for Frontier <abbr title="artificial intelligence">AI</abbr> Safety,</a> Department for Science, Innovation and Technology, 2023. <a href="#fnref:114" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:115" role="doc-endnote"> <p>For an overview of <abbr title="Department for Science, Innovation and Technology">DSIT</abbr>’s latest research on public attitudes to data and <abbr title="artificial intelligence">AI</abbr>, see: <a href="https://www.gov.uk/government/publications/public-attitudes-to-data-and-ai-tracker-survey-wave-3" class="govuk-link">Public attitudes to data and <abbr title="artificial intelligence">AI</abbr>: Tracker survey (Wave 3)</a>, Department for Science, Innovation, and Technology, 2023 <a href="#fnref:115" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:116" role="doc-endnote"> <p>The <abbr title="Algorithmic Transparency Recording Standard">ATRS</abbr> is the Algorithmic Transparency Recording Standard. For more detail see section 5.1. <a href="#fnref:116" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:117" role="doc-endnote"> <p><a href="https://www.gov.uk/government/publications/frontier-ai-capabilities-and-risks-discussion-paper" class="govuk-link">Frontier <abbr title="artificial intelligence">AI</abbr>: capabilities and risks</a>, Department for Science, Innovation, and Technology, 2023. <a href="#fnref:117" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:118" role="doc-endnote"> <p><a href="https://www.gov.uk/government/publications/ai-safety-summit-2023-the-bletchley-declaration/the-bletchley-declaration-by-countries-attending-the-ai-safety-summit-1-2-november-2023" class="govuk-link">The Bletchley Declaration by Countries Attending the <abbr title="artificial intelligence">AI</abbr> Safety Summit, 1-2 November 2023</a>, Department for Science, Innovation, and Technology; Foreign, Commonwealth and Development Office; Prime Minister’s Office, 10 Downing Street, 2023. 
<a href="#fnref:118" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:119" role="doc-endnote"> <p><a href="https://www.gov.uk/government/news/international-expertise-to-drive-international-ai-safety-report" class="govuk-link">International expertise to drive International <abbr title="artificial intelligence">AI</abbr> Safety Report</a>, Department for Science, Innovation and Technology, 2024. <a href="#fnref:119" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:120" role="doc-endnote"> <p><a href="https://www.gov.uk/government/publications/emerging-processes-for-frontier-ai-safety/emerging-processes-for-frontier-ai-safety" class="govuk-link">Emerging Processes for Frontier <abbr title="artificial intelligence">AI</abbr> Safety,</a> Department for Science, Innovation and Technology, 2023. <a href="#fnref:120" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:121" role="doc-endnote"> <p><a href="https://www.gov.uk/government/publications/emerging-processes-for-frontier-ai-safety/emerging-processes-for-frontier-ai-safety" class="govuk-link">Emerging Processes for Frontier <abbr title="artificial intelligence">AI</abbr> Safety,</a> Department for Science, Innovation and Technology, 2023. <a href="#fnref:121" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:122" role="doc-endnote"> <p><a href="https://www.gov.uk/government/publications/office-for-artificial-intelligence-information-collection-and-analysis-privacy-notice/office-for-artificial-intelligence-information-collection-and-analysis-privacy-notice" class="govuk-link">Office for Artificial Intelligence – information collection and analysis: privacy notice,</a> Department for Science, Innovation and Technology, 2023. <a href="#fnref:122" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:123" role="doc-endnote"> <p><a href="https://www.gov.uk/government/publications/office-for-artificial-intelligence-information-collection-and-analysis-privacy-notice/office-for-artificial-intelligence-information-collection-and-analysis-privacy-notice" class="govuk-link">Office for Artificial Intelligence – information collection and analysis: privacy notice,</a> Department for Science, Innovation and Technology, 2023. <a href="#fnref:123" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> <li id="fn:124" role="doc-endnote"> <p><a rel="external" href="https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1147045/uk_ai_regulation_impact_assessment.pdf" class="govuk-link">UK Artificial Intelligence Regulation Impact Assessment</a>, Department for Science, Innovation and Technology, 2023. 
<a href="#fnref:124" class="govuk-link" role="doc-backlink" aria-label="go to where this is referenced">↩</a></p> </li> </ol> </div> </div> </div> </div> </div> </div> <div class="govuk-grid-row"> <a class="govuk-link app-c-back-to-top govuk-!-display-none-print" href="#contents"> <svg class="app-c-back-to-top__icon" xmlns="http://www.w3.org/2000/svg" width="13" height="17" viewbox="0 0 13 17" aria-hidden="true" focusable="false"> <path fill="currentColor" d="M6.5 0L0 6.5 1.4 8l4-4v12.7h2V4l4.3 4L13 6.4z"></path> </svg> Back to top </a> </div> </div> </main> </div> <div class="govuk-width-container"> <div data-module="feedback ga4-event-tracker" class="gem-c-feedback govuk-!-display-none-print"> <div class="gem-c-feedback__prompt gem-c-feedback__js-show js-prompt" tabindex="-1"> <div class="gem-c-feedback__prompt-content"> <div class="gem-c-feedback__prompt-questions js-prompt-questions" hidden> <div class="gem-c-feedback__prompt-question-answer"> <h2 class="gem-c-feedback__prompt-question">Is this page useful?</h2> <ul class="gem-c-feedback__option-list"> <li class="gem-c-feedback__option-list-item govuk-visually-hidden" hidden> <a class="gem-c-feedback__prompt-link" role="button" hidden="hidden" aria-hidden="true" href="/contact/govuk"> Maybe </a> </li> <li class="gem-c-feedback__option-list-item"> <button class="govuk-button gem-c-feedback__prompt-link js-page-is-useful" data-ga4-event='{"event_name":"form_submit","type":"feedback","text":"Yes","section":"Is this page useful?","tool_name":"Is this page useful?"}'> Yes <span class="govuk-visually-hidden">this page is useful</span> </button> </li> <li class="gem-c-feedback__option-list-item"> <button class="govuk-button gem-c-feedback__prompt-link js-toggle-form js-page-is-not-useful" aria-controls="page-is-not-useful" aria-expanded="false" data-ga4-event='{"event_name":"form_submit","type":"feedback","text":"No","section":"Is this page useful?","tool_name":"Is this page useful?"}'> No <span class="govuk-visually-hidden">this page is not useful</span> </button> </li> </ul> </div> </div> <div class="gem-c-feedback__prompt-questions gem-c-feedback__prompt-success js-prompt-success" role="alert" hidden> Thank you for your feedback </div> <div class="gem-c-feedback__prompt-questions gem-c-feedback__prompt-questions--something-is-wrong js-prompt-questions" hidden> <button class="govuk-button gem-c-feedback__prompt-link js-toggle-form js-something-is-wrong" aria-expanded="false" aria-controls="something-is-wrong" data-ga4-event='{"event_name":"form_submit","type":"feedback","text":"Report a problem with this page","section":"Is this page useful?","tool_name":"Is this page useful?"}'> Report a problem with this page </button> </div> </div> </div> <form action="https://www.gov.uk/contact/govuk/problem_reports" id="something-is-wrong" class="gem-c-feedback__form js-feedback-form" method="post" hidden> <div class="govuk-grid-row"> <div class="govuk-grid-column-two-thirds"> <div class="gem-c-feedback__error-summary gem-c-feedback__js-show js-errors" tabindex="-1" hidden></div> <input type="hidden" name="url" value="https://www.gov.uk/government/consultations/ai-regulation-a-pro-innovation-approach-policy-proposals/outcome/a-pro-innovation-approach-to-ai-regulation-government-response"> <h3 class="gem-c-feedback__form-heading">Help us improve GOV.UK</h3> <p id="feedback_explanation" class="gem-c-feedback__form-paragraph">Don’t include personal or financial information like your National Insurance number or credit card details.</p> <div 
class="govuk-visually-hidden" aria-hidden="true"> <label for="giraffe">This field is for robots only. Please leave blank</label> <input id="giraffe" name="giraffe" type="text" pattern=".{0}" tabindex="-1" autocomplete="off"> </div> <div class="gem-c-textarea govuk-form-group govuk-!-margin-bottom-6"> <label for="textarea-13e724dd" class="gem-c-label govuk-label">What were you doing?</label> <textarea name="what_doing" class="govuk-textarea" id="textarea-13e724dd" rows="3" spellcheck="true" aria-describedby="feedback_explanation"> </textarea> </div> <div class="gem-c-textarea govuk-form-group govuk-!-margin-bottom-6"> <label for="textarea-b13d5c16" class="gem-c-label govuk-label">What went wrong?</label> <textarea name="what_wrong" class="govuk-textarea" id="textarea-b13d5c16" rows="3" spellcheck="true"> </textarea> </div> <button class="gem-c-button govuk-button" type="submit" data-ga4-event='{"event_name":"form_submit","type":"feedback","text":"Send","section":"Help us improve GOV.UK","tool_name":"Help us improve GOV.UK"}'>Send</button> <button class="govuk-button govuk-button--secondary gem-c-feedback__close gem-c-feedback__js-show js-close-form" aria-controls="something-is-wrong" aria-expanded="true"> Cancel </button> </div> </div> </form> <script nonce="5/ZjuHdQNbqS5S7VIOMX9A=="> //<![CDATA[ document.addEventListener("DOMContentLoaded", function () { var input = document.querySelector("#giraffe"), form = document.querySelector("#something-is-wrong") form.addEventListener("submit", spamCapture); function spamCapture(e) { if (input.value.length !== 0) return; e.preventDefault(); } }); //]]> </script> <div id="page-is-not-useful" class="gem-c-feedback__form gem-c-feedback__form--email gem-c-feedback__js-show js-feedback-form"> <div class="govuk-grid-row"> <div class="govuk-grid-column-two-thirds" id="survey-wrapper"> <div class="gem-c-feedback__error-summary js-errors" tabindex="-1" hidden></div> <h3 class="gem-c-feedback__form-heading">Help us improve GOV.UK</h3> <p id="survey_explanation" class="gem-c-feedback__form-paragraph"> To help us improve GOV.UK, we’d like to know more about your visit today. <a href="https://www.smartsurvey.co.uk/s/gov-uk-banner/?c=no-js" class="govuk-link" target="_blank" rel="noopener noreferrer external">Please fill in this survey (opens in a new tab)</a>. 
</p> <button class="govuk-button govuk-button--secondary js-close-form" aria-controls="page-is-not-useful" aria-expanded="true" hidden> Cancel </button> </div> </div> </div> </div> </div> <footer data-module="ga4-link-tracker" class="gem-c-layout-footer govuk-footer gem-c-layout-footer--border"> <div class="govuk-width-container"> <div class="govuk-footer__navigation"> <div class="govuk-grid-column-two-thirds govuk-!-display-none-print"> <h2 class="govuk-footer__heading govuk-heading-m">Services and information</h2> <ul class="govuk-footer__list govuk-footer__list--columns-2"> <li class="govuk-footer__list-item"> <a class="govuk-footer__link" data-ga4-link='{"event_name":"navigation","type":"footer","index_link":"1","index_section":"1","index_section_count":"5","index_total":"16","section":"Services and information"}' href="https://www.gov.uk/browse/benefits">Benefits</a> </li> <li class="govuk-footer__list-item"> <a class="govuk-footer__link" data-ga4-link='{"event_name":"navigation","type":"footer","index_link":"2","index_section":"1","index_section_count":"5","index_total":"16","section":"Services and information"}' href="https://www.gov.uk/browse/births-deaths-marriages">Births, death, marriages and care</a> </li> <li class="govuk-footer__list-item"> <a class="govuk-footer__link" data-ga4-link='{"event_name":"navigation","type":"footer","index_link":"3","index_section":"1","index_section_count":"5","index_total":"16","section":"Services and information"}' href="https://www.gov.uk/browse/business">Business and self-employed</a> </li> <li class="govuk-footer__list-item"> <a class="govuk-footer__link" data-ga4-link='{"event_name":"navigation","type":"footer","index_link":"4","index_section":"1","index_section_count":"5","index_total":"16","section":"Services and information"}' href="https://www.gov.uk/browse/childcare-parenting">Childcare and parenting</a> </li> <li class="govuk-footer__list-item"> <a class="govuk-footer__link" data-ga4-link='{"event_name":"navigation","type":"footer","index_link":"5","index_section":"1","index_section_count":"5","index_total":"16","section":"Services and information"}' href="https://www.gov.uk/browse/citizenship">Citizenship and living in the UK</a> </li> <li class="govuk-footer__list-item"> <a class="govuk-footer__link" data-ga4-link='{"event_name":"navigation","type":"footer","index_link":"6","index_section":"1","index_section_count":"5","index_total":"16","section":"Services and information"}' href="https://www.gov.uk/browse/justice">Crime, justice and the law</a> </li> <li class="govuk-footer__list-item"> <a class="govuk-footer__link" data-ga4-link='{"event_name":"navigation","type":"footer","index_link":"7","index_section":"1","index_section_count":"5","index_total":"16","section":"Services and information"}' href="https://www.gov.uk/browse/disabilities">Disabled people</a> </li> <li class="govuk-footer__list-item"> <a class="govuk-footer__link" data-ga4-link='{"event_name":"navigation","type":"footer","index_link":"8","index_section":"1","index_section_count":"5","index_total":"16","section":"Services and information"}' href="https://www.gov.uk/browse/driving">Driving and transport</a> </li> <li class="govuk-footer__list-item"> <a class="govuk-footer__link" data-ga4-link='{"event_name":"navigation","type":"footer","index_link":"9","index_section":"1","index_section_count":"5","index_total":"16","section":"Services and information"}' href="https://www.gov.uk/browse/education">Education and learning</a> </li> <li class="govuk-footer__list-item"> <a 
class="govuk-footer__link" data-ga4-link='{"event_name":"navigation","type":"footer","index_link":"10","index_section":"1","index_section_count":"5","index_total":"16","section":"Services and information"}' href="https://www.gov.uk/browse/employing-people">Employing people</a> </li> <li class="govuk-footer__list-item"> <a class="govuk-footer__link" data-ga4-link='{"event_name":"navigation","type":"footer","index_link":"11","index_section":"1","index_section_count":"5","index_total":"16","section":"Services and information"}' href="https://www.gov.uk/browse/environment-countryside">Environment and countryside</a> </li> <li class="govuk-footer__list-item"> <a class="govuk-footer__link" data-ga4-link='{"event_name":"navigation","type":"footer","index_link":"12","index_section":"1","index_section_count":"5","index_total":"16","section":"Services and information"}' href="https://www.gov.uk/browse/housing-local-services">Housing and local services</a> </li> <li class="govuk-footer__list-item"> <a class="govuk-footer__link" data-ga4-link='{"event_name":"navigation","type":"footer","index_link":"13","index_section":"1","index_section_count":"5","index_total":"16","section":"Services and information"}' href="https://www.gov.uk/browse/tax">Money and tax</a> </li> <li class="govuk-footer__list-item"> <a class="govuk-footer__link" data-ga4-link='{"event_name":"navigation","type":"footer","index_link":"14","index_section":"1","index_section_count":"5","index_total":"16","section":"Services and information"}' href="https://www.gov.uk/browse/abroad">Passports, travel and living abroad</a> </li> <li class="govuk-footer__list-item"> <a class="govuk-footer__link" data-ga4-link='{"event_name":"navigation","type":"footer","index_link":"15","index_section":"1","index_section_count":"5","index_total":"16","section":"Services and information"}' href="https://www.gov.uk/browse/visas-immigration">Visas and immigration</a> </li> <li class="govuk-footer__list-item"> <a class="govuk-footer__link" data-ga4-link='{"event_name":"navigation","type":"footer","index_link":"16","index_section":"1","index_section_count":"5","index_total":"16","section":"Services and information"}' href="https://www.gov.uk/browse/working">Working, jobs and pensions</a> </li> </ul> </div> <div class="govuk-grid-column-one-third govuk-!-display-none-print"> <h2 class="govuk-footer__heading govuk-heading-m">Government activity</h2> <ul class="govuk-footer__list govuk-footer__list--columns-1"> <li class="govuk-footer__list-item"> <a class="govuk-footer__link" data-ga4-link='{"event_name":"navigation","type":"footer","index_link":"1","index_section":"2","index_section_count":"5","index_total":"8","section":"Government activity"}' href="https://www.gov.uk/government/organisations">Departments</a> </li> <li class="govuk-footer__list-item"> <a class="govuk-footer__link" data-ga4-link='{"event_name":"navigation","type":"footer","index_link":"2","index_section":"2","index_section_count":"5","index_total":"8","section":"Government activity"}' href="https://www.gov.uk/search/news-and-communications">News</a> </li> <li class="govuk-footer__list-item"> <a class="govuk-footer__link" data-ga4-link='{"event_name":"navigation","type":"footer","index_link":"3","index_section":"2","index_section_count":"5","index_total":"8","section":"Government activity"}' href="https://www.gov.uk/search/guidance-and-regulation">Guidance and regulation</a> </li> <li class="govuk-footer__list-item"> <a class="govuk-footer__link" 
data-ga4-link='{"event_name":"navigation","type":"footer","index_link":"4","index_section":"2","index_section_count":"5","index_total":"8","section":"Government activity"}' href="https://www.gov.uk/search/research-and-statistics">Research and statistics</a> </li> <li class="govuk-footer__list-item"> <a class="govuk-footer__link" data-ga4-link='{"event_name":"navigation","type":"footer","index_link":"5","index_section":"2","index_section_count":"5","index_total":"8","section":"Government activity"}' href="https://www.gov.uk/search/policy-papers-and-consultations">Policy papers and consultations</a> </li> <li class="govuk-footer__list-item"> <a class="govuk-footer__link" data-ga4-link='{"event_name":"navigation","type":"footer","index_link":"6","index_section":"2","index_section_count":"5","index_total":"8","section":"Government activity"}' href="https://www.gov.uk/search/transparency-and-freedom-of-information-releases">Transparency</a> </li> <li class="govuk-footer__list-item"> <a class="govuk-footer__link" data-ga4-link='{"event_name":"navigation","type":"footer","index_link":"7","index_section":"2","index_section_count":"5","index_total":"8","section":"Government activity"}' href="https://www.gov.uk/government/how-government-works">How government works</a> </li> <li class="govuk-footer__list-item"> <a class="govuk-footer__link" data-ga4-link='{"event_name":"navigation","type":"footer","index_link":"8","index_section":"2","index_section_count":"5","index_total":"8","section":"Government activity"}' href="https://www.gov.uk/government/get-involved">Get involved</a> </li> </ul> </div> </div> <hr class="govuk-footer__section-break govuk-!-display-none-print"> <div class="govuk-footer__meta"> <div class="govuk-footer__meta-item govuk-footer__meta-item--grow"> <h2 class="govuk-visually-hidden">Support links</h2> <ul class="govuk-footer__inline-list govuk-!-display-none-print"> <li class="govuk-footer__inline-list-item"> <a class="govuk-footer__link" data-ga4-link='{"event_name":"navigation","type":"footer","index_link":"1","index_section":"3","index_section_count":"5","index_total":"8","section":"Support links"}' href="https://www.gov.uk/help">Help</a> </li> <li class="govuk-footer__inline-list-item"> <a class="govuk-footer__link" data-ga4-link='{"event_name":"navigation","type":"footer","index_link":"2","index_section":"3","index_section_count":"5","index_total":"8","section":"Support links"}' href="https://www.gov.uk/help/privacy-notice">Privacy</a> </li> <li class="govuk-footer__inline-list-item"> <a class="govuk-footer__link" data-ga4-link='{"event_name":"navigation","type":"footer","index_link":"3","index_section":"3","index_section_count":"5","index_total":"8","section":"Support links"}' href="https://www.gov.uk/help/cookies">Cookies</a> </li> <li class="govuk-footer__inline-list-item"> <a class="govuk-footer__link" data-ga4-link='{"event_name":"navigation","type":"footer","index_link":"4","index_section":"3","index_section_count":"5","index_total":"8","section":"Support links"}' href="https://www.gov.uk/help/accessibility-statement">Accessibility statement</a> </li> <li class="govuk-footer__inline-list-item"> <a class="govuk-footer__link" data-ga4-link='{"event_name":"navigation","type":"footer","index_link":"5","index_section":"3","index_section_count":"5","index_total":"8","section":"Support links"}' href="https://www.gov.uk/contact">Contact</a> </li> <li class="govuk-footer__inline-list-item"> <a class="govuk-footer__link" 
data-ga4-link='{"event_name":"navigation","type":"footer","index_link":"6","index_section":"3","index_section_count":"5","index_total":"8","section":"Support links"}' href="https://www.gov.uk/help/terms-conditions">Terms and conditions</a> </li> <li class="govuk-footer__inline-list-item"> <a class="govuk-footer__link" lang="cy" data-ga4-link='{"event_name":"navigation","type":"footer","index_link":"7","index_section":"3","index_section_count":"5","index_total":"8","section":"Support links"}' href="https://www.gov.uk/cymraeg">Rhestr o Wasanaethau Cymraeg</a> </li> <li class="govuk-footer__inline-list-item"> <a class="govuk-footer__link" data-ga4-link='{"event_name":"navigation","type":"footer","index_link":"8","index_section":"3","index_section_count":"5","index_total":"8","section":"Support links"}' href="https://www.gov.uk/government/organisations/government-digital-service">Government Digital Service</a> </li> </ul> <svg aria-hidden="true" focusable="false" class="govuk-footer__licence-logo" xmlns="http://www.w3.org/2000/svg" viewbox="0 0 483.2 195.7" height="17" width="41"> <path fill="currentColor" d="M421.5 142.8V.1l-50.7 32.3v161.1h112.4v-50.7zm-122.3-9.6A47.12 47.12 0 0 1 221 97.8c0-26 21.1-47.1 47.1-47.1 16.7 0 31.4 8.7 39.7 21.8l42.7-27.2A97.63 97.63 0 0 0 268.1 0c-36.5 0-68.3 20.1-85.1 49.7A98 98 0 0 0 97.8 0C43.9 0 0 43.9 0 97.8s43.9 97.8 97.8 97.8c36.5 0 68.3-20.1 85.1-49.7a97.76 97.76 0 0 0 149.6 25.4l19.4 22.2h3v-87.8h-80l24.3 27.5zM97.8 145c-26 0-47.1-21.1-47.1-47.1s21.1-47.1 47.1-47.1 47.2 21 47.2 47S123.8 145 97.8 145"></path> </svg> <span class="govuk-footer__licence-description" data-ga4-track-links-only data-ga4-link='{"event_name":"navigation","section":"Licence","index_section":"4","index_link":"1","index_section_count":"5","text":"Open Government Licence v3.0","index_total":"1","type":"footer"}'> All content is available under the <a class="govuk-footer__link" href="https://www.nationalarchives.gov.uk/doc/open-government-licence/version/3/" rel="license">Open Government Licence v3.0</a>, except where otherwise stated </span> </div> <div class="govuk-footer__meta-item" data-ga4-link='{"event_name":"navigation","section":"Copyright","index_section":"5","index_link":"1","index_section_count":"5","text":"© Crown copyright","index_total":"1","type":"footer"}'> <a class="govuk-footer__link govuk-footer__copyright-logo" href="https://www.nationalarchives.gov.uk/information-management/re-using-public-sector-information/uk-government-licensing-framework/crown-copyright/">© Crown copyright</a> </div> </div> </div> </footer> <script src="/assets/static/application-b99c730a51dedb2a3e3315cc0bc81947c2cb065952ff296bc3747c84d92c217c.js" type="module"></script> <script src="/assets/government-frontend/application-a5f137ecbb7d8dc1b5470d4f66e3ec63680098e906fcf1d213801ff0d72b94ab.js" type="module"></script><script type="application/ld+json"> { "@context": "http://schema.org", "@type": "FAQPage", "mainEntityOfPage": { "@type": "WebPage", "@id": "https://www.gov.uk/government/consultations/ai-regulation-a-pro-innovation-approach-policy-proposals/outcome/a-pro-innovation-approach-to-ai-regulation-government-response" }, "name": "A pro-innovation approach to AI regulation: government response", "datePublished": "2024-02-06T10:32:07+00:00", "dateModified": "2024-02-06T10:32:06+00:00", "text": null, "publisher": { "@type": "Organization", "name": "GOV.UK", "url": "https://www.gov.uk", "logo": { "@type": "ImageObject", "url": 
"https://www.gov.uk/assets/government-frontend/govuk_publishing_components/govuk-logo-b15a4d254746d1642b8187217576d1e8fe50b51352d352fda13eee55d3c1c80a.png" } }, "image": [ "https://www.gov.uk/assets/government-frontend/govuk_publishing_components/govuk-schema-placeholder-1x1-c3d38c0d4fc00005df38a71e1db7097276681d6917bca58f0dc8336a252e1bb3.png", "https://www.gov.uk/assets/government-frontend/govuk_publishing_components/govuk-schema-placeholder-4x3-edc38c137a14ecfc3fc83f404090e20dab806dad345c96a1df6a163ee2d1e3aa.png", "https://www.gov.uk/assets/government-frontend/govuk_publishing_components/govuk-schema-placeholder-16x9-5dc2d0ea1eb72cd94e66db210ef41b22ce364e7ed09d63a3acc28fda09e27864.png" ], "author": { "@type": "Organization", "name": "Department for Science, Innovation and Technology", "url": "https://www.gov.uk/government/organisations/department-for-science-innovation-and-technology" }, "mainEntity": [ { "@type": "Question", "name": "\n1. Ministerial foreword", "url": "https://www.gov.uk/government/consultations/ai-regulation-a-pro-innovation-approach-policy-proposals/outcome/a-pro-innovation-approach-to-ai-regulation-government-response#ministerial-foreword", "acceptedAnswer": { "@type": "Answer", "url": "https://www.gov.uk/government/consultations/ai-regulation-a-pro-innovation-approach-policy-proposals/outcome/a-pro-innovation-approach-to-ai-regulation-government-response#ministerial-foreword", "text": "\u003cfigure class=\"image embedded\"\u003e\u003cdiv class=\"img\"\u003e\u003cimg src=\"https://assets.publishing.service.gov.uk/media/65af799ffd784b000de0c6bf/sos_minister_michelle_govuk.jpg\" alt=\"\"\u003e\u003c/div\u003e\n\u003cfigcaption\u003e\u003cp\u003eThe Rt Hon Michelle Donelan MP, Secretary of State for Science, Innovation and Technology.\u003c/p\u003e\u003c/figcaption\u003e\u003c/figure\u003e\u003cp\u003eThe world is on the cusp of an extraordinary new era driven by advances in Artificial Intelligence (\u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e). I see the rapid improvements in \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e capabilities as a once-in-a-generation opportunity for the British people to revolutionise our public services for the better and to deliver real, tangible, long-term results for our country.\u003c/p\u003e\u003cp\u003eThe UK \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e market is predicted to grow to over $1 trillion (\u003cabbr title=\"United States dollar\"\u003eUSD\u003c/abbr\u003e) by 2035\u003csup id=\"fnref:1\" role=\"doc-noteref\"\u003e\u003ca href=\"#fn:1\" class=\"govuk-link\" rel=\"footnote\"\u003e[footnote 1]\u003c/a\u003e\u003c/sup\u003e – unlocking everything from new skills and jobs to once unimaginable life saving treatments for cruel diseases like cancer and dementia. My ambition is for us to revolutionise the way we deliver public services by becoming a global leader in safe \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e development and deployment.\u003c/p\u003e\u003cp\u003eWe have done more than any government in history to make that a reality, and our plan is working. 
Last year, we hosted the world’s first \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e Safety Summit, bringing industry, academia, and civil society together with 28 leading \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e nations and the EU to agree the Bletchley Declaration – a landmark commitment to share responsibility on mitigating the risks of frontier \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e, collaborate on safety and research, and to promote its potential as a force for good in this world.\u003c/p\u003e\u003cp\u003eWe were the first government in the world to formally publish our assessment of the capabilities and risks presented by advanced \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e. Research-driven reports produced by \u003cabbr title=\"Department for Science, Innovation and Technology\"\u003eDSIT\u003c/abbr\u003e and the Government Office for Science\u003csup id=\"fnref:2\" role=\"doc-noteref\"\u003e\u003ca href=\"#fn:2\" class=\"govuk-link\" rel=\"footnote\"\u003e[footnote 2]\u003c/a\u003e\u003c/sup\u003e laid the groundwork for an international agreement on evaluating the scientific basis for \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e safety.\u003c/p\u003e\u003cp\u003eWe brought together a powerful consortium of experts in our \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e Safety Institute, the first government-backed organisation of its kind anywhere in the world, committed to advancing \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e safety in the public interest.\u003c/p\u003e\u003cp\u003eWith the publication of our \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e regulation white paper in March 2023, I wanted to take a bold and considered approach that is strongly pro-innovation and pro-safety. I knew that our approach had to remain agile enough to deal with the unprecedented speed of development, while also remaining robust enough in each sector to address the key concerns around potential societal harms, misuse risks, and autonomy risks that our thought leadership exercises have revealed.\u003c/p\u003e\u003cp\u003eThis agile, sector-based approach has empowered regulators to create bespoke measures that are tailored to the various needs and risks posed by different sections of our economy. The white paper proposed five clear principles for existing UK regulators to follow, and set out our expectations for responsible \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e innovation.\u003c/p\u003e\u003cp\u003eThis common sense, pragmatic approach has been welcomed and endorsed both by the companies at the frontier of \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e development and leading \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e safety experts. 
Google DeepMind, Microsoft, OpenAI and Anthropic all supported the UK’s approach, as did Britain’s budding \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e start-up scene, and many leading voices in academia and civil society.\u003c/p\u003e\u003cp\u003eIn considering our response to the consultation, I have sought to double-down on this success and drive forward our plans to make Britain the safest and most innovative place to develop and deploy \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e in the world, backed by over £100 million to support \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e innovation and regulation. Building on feedback from the consultation, we have set up a central function to drive coherence in our regulatory approach across government, including by recruiting a new multidisciplinary team to conduct cross-sector risk assessment and monitoring to guard against existing and emerging risks in \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e.\u003c/p\u003e\u003cp\u003eWith the Digital Regulation Cooperation Forum (\u003cabbr title=\"Digital Regulation Cooperation Forum\"\u003eDRCF\u003c/abbr\u003e), we have launched the \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e and Digital Hub, a pilot scheme for a brand-new advisory service to support innovation run by expert regulators including \u003cabbr title=\"Office of Communications\"\u003eOfcom\u003c/abbr\u003e, the \u003cabbr title=\"Competition and Markets Authority\"\u003eCMA\u003c/abbr\u003e, the \u003cabbr title=\"Financial Conduct Authority\"\u003eFCA\u003c/abbr\u003e and the \u003cabbr title=\"Information Commissioner's Office\"\u003eICO\u003c/abbr\u003e\u003csup id=\"fnref:3\" role=\"doc-noteref\"\u003e\u003ca href=\"#fn:3\" class=\"govuk-link\" rel=\"footnote\"\u003e[footnote 3]\u003c/a\u003e\u003c/sup\u003e. We are also investing in new support for regulators to build their practical, technical expertise and backing the launch of nine new research hubs across the UK to harness the power of \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e in everything from mathematics to healthcare.\u003c/p\u003e\u003cp\u003eAdvancing our thought-leadership on safety, we also lay out the case for a set of targeted, binding requirements on developers of highly capable general-purpose \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e models in the future to ensure that powerful, sophisticated \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e develops in a way which is safe. And our targeted consultations on our cross-economy \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e risk register and monitoring and evaluation framework will engage with leading voices from regulators, academia, civil society, and industry.\u003c/p\u003e\u003cp\u003eThe \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e Safety Institute’s technical experts will have a crucial role to play here as we develop our approach on the regulation of highly capable general-purpose systems. 
We will work closely with \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e developers, with academics and civil society members who can provide independent expert perspectives, and also with our international partners ahead of the next \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e Safety Summits in the Republic of Korea and France.\u003c/p\u003e\u003cp\u003eFinally, my thinking on the UK’s \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e leadership role goes well beyond the immediate horizon. We will need to lead fields of research that will help us build a more resilient society ready for a world where advanced \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e technology and the means to develop it are widely accessible. That means improving our defensive capabilities against bad actors seeking to use \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e to do harm, it means designing new internet infrastructure for a digital world full of agentic \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e systems, and it also means leveraging \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e to improve critical aspects of our society such as democratic deliberation and consensus. \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e can and must remain a force for the public good, and we will ensure that is the case as we develop our policy approach in this area.\u003c/p\u003e\u003cp\u003eThis response paper is another clear, decisive step forward for the UK’s ambitions to lead in safe \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e and to be a Science and Technology Superpower by the end of the decade. Whether you are an \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e developer, user, safety researcher or you represent civil society, we all have a shared interest in realising the opportunities of safe \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e development. I am personally driven by a mission to improve the lives of the British people through technology and innovation, and our response paper sets out exactly how that mission will become a reality.\u003c/p\u003e" } }, { "@type": "Question", "name": "\n2. Executive summary", "url": "https://www.gov.uk/government/consultations/ai-regulation-a-pro-innovation-approach-policy-proposals/outcome/a-pro-innovation-approach-to-ai-regulation-government-response#executive-summary", "acceptedAnswer": { "@type": "Answer", "url": "https://www.gov.uk/government/consultations/ai-regulation-a-pro-innovation-approach-policy-proposals/outcome/a-pro-innovation-approach-to-ai-regulation-government-response#executive-summary", "text": "\u003cul\u003e\n \u003cli\u003e\n \u003cp\u003eThe pace of progress in Artificial Intelligence (\u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e) has been unlike any previous technology and the benefits are already being realised across the UK: \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e is helping to make our jobs safer and more satisfying, conserve our wildlife and fight climate change, and make our public services more efficient. 
2. Executive summary

- The pace of progress in Artificial Intelligence (AI) has been unlike that of any previous technology, and the benefits are already being realised across the UK: AI is helping to make our jobs safer and more satisfying, conserve our wildlife and fight climate change, and make our public services more efficient. Not only do we need to plan for the capabilities and uses of the AI systems we have today, but we must also prepare for a near future in which the most powerful systems are broadly accessible and significantly more capable[footnote 4].

- The UK is leading the world in how to respond to this challenge. Our approach to preparing for such a future is firmly pro-innovation. To realise the immense benefits of these technologies, we must ensure AI’s trustworthiness and public adoption through a strong pro-safety approach. As the Prime Minister set out in a landmark speech in October 2023, “the future of AI is safe AI. And by making the UK a global leader in safe AI, we will attract even more of the new jobs and investment that will come from this new wave of technology”[footnote 5]. To achieve this, the UK is investing more in AI safety than any other country in the world. Today we are announcing over £100 million to help realise new AI innovations and support regulators’ technical capabilities.

- Our regulatory framework builds on the existing strengths of both our thriving AI industry and our expert regulatory ecosystem. We are focused on ensuring that regulators are prepared to face the new challenges and opportunities that AI can bring to their domains. By working closely with regulators to ensure cohesion across the landscape, we are ensuring that innovators can bring new products to market safely and quickly. Today we are announcing several new initiatives to make the UK an even better place to build and use AI, including £10 million to jumpstart regulators’ AI capabilities; a new commitment by UK Research and Innovation (UKRI) that future investments in AI research will be leveraged to support regulator skills and expertise; and a £9 million partnership with the US on responsible AI as part of our International Science Partnerships Fund[footnote 6]. Through this and other work on AI across government, the UK will continue to respond to risks proportionately and effectively, striving to lead thinking on AI in the years to come.

- In March 2023, we published our AI regulation white paper, setting out initial proposals to develop a pro-innovation regulatory framework for AI. The proposed framework outlined five cross-sectoral principles for the UK’s existing regulators to interpret and apply within their remits. We also proposed a new central function to bring coherence to the regime and address regulatory gaps. This flexible and adaptive regulatory approach has enabled us to act decisively and respond to technological progress.

- Our context-based framework received strong support from stakeholders across society, and we have acted quickly to implement it. We are pleased that a number of regulators are already taking action in line with our proposed approach, from the Competition and Markets Authority’s (CMA) review of foundation models to the updated guidance on data protection and AI by the Information Commissioner’s Office (ICO). We are asking a number of regulators to publish an update outlining their strategic approach to AI by 30 April 2024.

- We have already started developing the central function to support effective risk monitoring, regulator coordination, and knowledge exchange. Our new £10 million package to boost regulators’ AI capabilities, mentioned above, will help our regulators develop cutting-edge research and practical tools to build the foundations of their AI expertise and their everyday ability to address AI risks in their domains. Today, we are also publishing new guidance to support regulators to implement the principles effectively, and the Digital Regulation Cooperation Forum (DRCF) is sharing details of the eligibility criteria for the support to be offered by the AI and Digital Hub pilot.

- We are backing this approach with wider support for the AI ecosystem, including committing over £1.5 billion in 2023 to build the next generation of supercomputers in the public sector, and today announcing an £80 million boost to AI research through the launch of nine new research hubs across the UK to propel transformative innovations. In November 2023, the Prime Minister brought together leading global actors in AI for the first AI Safety Summit, where they discussed and agreed actions to address emerging risks posed by the development and deployment of the most powerful AI systems. Leading AI developers set out the steps they are already taking to make models safe and committed to sharing the most powerful AI models with governments for testing, so that we can ensure safety today and prepare for the risks of tomorrow.

- Our initial technical contribution to this international effort is the creation of an AI Safety Institute to lead evaluations and safety research in the UK government, in collaboration with partners across the world, including in the US. The AI Safety Summit underscored the global nature of AI development and deployment, demonstrating the need for further work towards a coherent and collaborative approach to international governance.

- Our overall approach – combining cross-sectoral principles with a context-specific framework, international leadership and collaboration, and voluntary measures on developers – is right for today, as it allows us to keep pace with rapid and uncertain advances in AI. However, the challenges posed by AI technologies will ultimately require legislative action in every country once understanding of risk has matured. In this document, we build on our pro-innovation framework and pro-safety actions by setting out our early thinking on, and the questions we will need to consider for, the next stage of our regulatory approach.

- As AI systems advance in capability and societal impact, it is clear that some mandatory measures will ultimately be required across all jurisdictions to address potential AI-related harms, ensure public safety, and let us realise the transformative opportunities that the technology offers. However, acting before we properly understand the risks and the appropriate mitigations would harm our ability to benefit from technological progress while leaving us unable to adapt quickly to emerging risks. We are going to take our time to get this right – we will legislate when we are confident that it is the right thing to do.

- We have placed a particular emphasis on the challenges that highly capable general-purpose AI systems pose to a context-based framework. Here we lay out a pro-innovation case for further targeted binding requirements on the small number of organisations developing highly capable general-purpose AI systems, to ensure that they are accountable for making these technologies sufficiently safe. This can be done while allowing our expert regulators to provide effective rules for the use of AI within their remits.

- In the coming months, we will formally establish our activities to support regulator capabilities and coordination, including a new steering committee with government and regulator representatives to support coordination across the AI governance landscape. We will conduct targeted consultations on our cross-economy AI risk register and our plan to assess the regulatory framework. We will continue our work to address the key issues of today, from electoral interference to discrimination to intellectual property law, and the most pressing risks of tomorrow, such as biosecurity and AI alignment. We will also continue to lead international conversations on AI governance across a range of fora and initiatives in the lead-up to the next AI Safety Summits in the Republic of Korea and France.
Glossary", "url": "https://www.gov.uk/government/consultations/ai-regulation-a-pro-innovation-approach-policy-proposals/outcome/a-pro-innovation-approach-to-ai-regulation-government-response#glossary", "acceptedAnswer": { "@type": "Answer", "url": "https://www.gov.uk/government/consultations/ai-regulation-a-pro-innovation-approach-policy-proposals/outcome/a-pro-innovation-approach-to-ai-regulation-government-response#glossary", "text": "\u003cp\u003e\u003cstrong\u003eAdaptivity\u003c/strong\u003e: The ability to see patterns and make decisions in ways not directly envisioned by human programmers.\u003c/p\u003e\u003cp\u003e\u003cstrong\u003eArtificial General Intelligence (\u003cabbr title=\"Artificial General Intelligence\"\u003eAGI\u003c/abbr\u003e)\u003c/strong\u003e: A theoretical form of advanced \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e that would have capabilities that compare to or exceed humans across most economically valuable work\u003csup id=\"fnref:7\" role=\"doc-noteref\"\u003e\u003ca href=\"#fn:7\" class=\"govuk-link\" rel=\"footnote\"\u003e[footnote 7]\u003c/a\u003e\u003c/sup\u003e. A number of \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e companies have publicly stated their aim to build \u003cabbr title=\"Artificial General Intelligence\"\u003eAGI\u003c/abbr\u003e and believe it may be achievable within the next twenty years. Other experts believe we may not build \u003cabbr title=\"Artificial General Intelligence\"\u003eAGI\u003c/abbr\u003e for many decades, if ever.\u003c/p\u003e\u003cp\u003e\u003cstrong\u003e\u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e agents\u003c/strong\u003e: Autonomous \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e systems that perform multiple sequential steps – sometimes including actions like browsing the internet, sending emails, or sending instructions to physical equipment – to try and complete a high-level task or goal.\u003c/p\u003e\u003cp\u003e\u003cstrong\u003e\u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e deployers\u003c/strong\u003e: Any individual or organisation that supplies or uses an \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e application to provide a product or service to an end user.\u003c/p\u003e\u003cp\u003e\u003cstrong\u003e\u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e developers\u003c/strong\u003e: Organisations or individuals who design, build, train, adapt, or combine \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e models and applications.\u003c/p\u003e\u003cp\u003e\u003cstrong\u003e\u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e end user\u003c/strong\u003e: Any intended or actual individual or organisation that uses or consumes an \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e-based product or service as it is deployed.\u003c/p\u003e\u003cp\u003e\u003cstrong\u003e\u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e life cycle\u003c/strong\u003e: All events and processes that relate to an \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e system’s lifespan, from inception to decommissioning, including its design, research, training, development, deployment, integration, operation, maintenance, sale, use, and governance.\u003c/p\u003e\u003cp\u003e\u003cstrong\u003e\u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e 
risks\u003c/strong\u003e: The potential negative or harmful outcomes arising from the development or deployment of \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e systems.\u003c/p\u003e\u003cp\u003e\u003cstrong\u003eAlignment\u003c/strong\u003e: The process of ensuring an \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e system’s goals and behaviours are in line with human values and intentions.\u003c/p\u003e\u003cp\u003e\u003cstrong\u003eApplication Programming Interface (API)\u003c/strong\u003e: A set of rules and protocols that enables integration and communication between \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e systems and other software applications.\u003c/p\u003e\u003cp\u003e\u003cstrong\u003eAutonomous\u003c/strong\u003e: Capable of operating, taking actions, or making decisions without the express intent or oversight of a human.\u003c/p\u003e\u003cp\u003e\u003cstrong\u003eCapabilities\u003c/strong\u003e: The range of tasks or functions that an \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e system can perform and the proficiency with which it can perform them.\u003c/p\u003e\u003cp\u003e\u003cstrong\u003eCompute\u003c/strong\u003e: Computational processing power, including Central Processing Units (\u003cabbr title=\"Central Processing Units\"\u003eCPUs\u003c/abbr\u003e), Graphics Processing Units (\u003cabbr title=\"Graphics Processing Units\"\u003eGPUs\u003c/abbr\u003e), and other hardware, used to run \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e models and algorithms.\u003c/p\u003e\u003cp\u003e\u003cstrong\u003eDevelopers of highly capable general-purpose systems\u003c/strong\u003e: A subsection of \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e developers, these organisations invest large amounts of resource into designing, building, and pre-training the most capable \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e foundation models. These models can underpin a wide range of \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e applications and may be deployed directly or adapted by downstream \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e developers.\u003c/p\u003e\u003cp\u003e\u003cstrong\u003eDisinformation\u003c/strong\u003e: Deliberately false information spread with the intent to deceive or mislead.\u003c/p\u003e\u003cp\u003e\u003cstrong\u003eFoundation models\u003c/strong\u003e: Machine learning models trained on very large amounts of data that can be adapted to a wide range of tasks.\u003c/p\u003e\u003cp\u003e\u003cstrong\u003eFrontier \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e\u003c/strong\u003e: For the \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e Safety Summit, we defined frontier \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e as models that can perform a wide variety of tasks and match or exceed the capabilities present in today’s most advanced models. 
In this paper, we focus on highly capable general-purpose \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e model developers to target our proposals for new responsibilities.\u003c/p\u003e\u003cp\u003e\u003cstrong\u003eMisinformation\u003c/strong\u003e: Incorrect or misleading information spread without harmful intent.\u003c/p\u003e\u003cp\u003e\u003cstrong\u003eSafety and security\u003c/strong\u003e: The protection, wellbeing, and autonomy of civil society and the population\u003csup id=\"fnref:8\" role=\"doc-noteref\"\u003e\u003ca href=\"#fn:8\" class=\"govuk-link\" rel=\"footnote\"\u003e[footnote 8]\u003c/a\u003e\u003c/sup\u003e. In this publication, safety is often used to describe prevention of or protection against \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e-related harms. \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e security refers to protecting \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e systems from technical interference such as cyber-attacks\u003csup id=\"fnref:9\" role=\"doc-noteref\"\u003e\u003ca href=\"#fn:9\" class=\"govuk-link\" rel=\"footnote\"\u003e[footnote 9]\u003c/a\u003e\u003c/sup\u003e.\u003c/p\u003e\u003cp\u003e\u003cstrong\u003eSuperhuman performance\u003c/strong\u003e: When an \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e model demonstrates capabilities that exceed human ability benchmarking for a specific task or activity.\u003c/p\u003e\u003cdiv class=\"call-to-action\"\u003e\n \u003ch3 id=\"box-1-different-types-of-ai-systems\"\u003eBox 1: Different types of \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e systems\u003c/h3\u003e\n\n \u003cp\u003eIn our discussion paper on frontier \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e capabilities and risks\u003csup id=\"fnref:10\" role=\"doc-noteref\"\u003e\u003ca href=\"#fn:10\" class=\"govuk-link\" rel=\"footnote\"\u003e[footnote 10]\u003c/a\u003e\u003c/sup\u003e, we noted that definitions of \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e are often challenging due to the quick advancements in the technology.\u003c/p\u003e\n\n \u003cp\u003eFor the purposes of developing a proportionate regulatory approach that effectively addresses the risks posed by the most powerful \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e systems, we currently distinguish between:\u003c/p\u003e\n\n \u003col\u003e\n \u003cli\u003e\n \u003cp\u003e\u003cstrong\u003eHighly capable general-purpose \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e\u003c/strong\u003e: Foundation models that can perform a wide variety of tasks and match or exceed the capabilities present in today’s most advanced models. Generally, such models will span from novice through to expert capabilities with some even showing superhuman performance across a range of tasks.\u003c/p\u003e\n \u003c/li\u003e\n \u003cli\u003e\n \u003cp\u003e\u003cstrong\u003eHighly capable narrow \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e\u003c/strong\u003e: Foundation models that can perform a narrow set of tasks, normally within a specific field such as biology, with capabilities that match or exceed those present in today’s most advanced models. 
Box 1: Different types of AI systems

In our discussion paper on frontier AI capabilities and risks[footnote 10], we noted that definitions of AI are often challenging due to the rapid advancement of the technology.

For the purposes of developing a proportionate regulatory approach that effectively addresses the risks posed by the most powerful AI systems, we currently distinguish between:

1. Highly capable general-purpose AI: Foundation models that can perform a wide variety of tasks and match or exceed the capabilities present in today's most advanced models. Generally, such models will span from novice through to expert capabilities, with some even showing superhuman performance across a range of tasks.

2. Highly capable narrow AI: Foundation models that can perform a narrow set of tasks, normally within a specific field such as biology, with capabilities that match or exceed those present in today's most advanced models. Generally, such models will demonstrate superhuman abilities on these narrow tasks or domains.

3. Agentic AI or AI agents: An emerging subset of AI technologies that can competently complete tasks over long timeframes and with multiple steps. These systems can use tools such as coding environments, the internet, and narrow AI models to complete tasks (a minimal sketch of such a loop follows this box).
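To make the "agentic AI" category concrete, here is a minimal, hypothetical sketch of an agent loop: a planner decomposes a task into steps and dispatches each step to a tool. The planner, tools, and stopping rule are illustrative stand-ins, not a description of any real system.

```python
# Toy agent loop: plan a task, then execute each step with a tool.
# The "planner" and "tools" are stubs standing in for real models/services.
from typing import Callable

def stub_planner(task: str) -> list[tuple[str, str]]:
    """Pretend model call: break a task into (tool_name, argument) steps."""
    return [("search", task), ("summarise", "search results")]

TOOLS: dict[str, Callable[[str], str]] = {
    "search": lambda query: f"[stub web results for: {query}]",
    "summarise": lambda text: f"[stub summary of: {text}]",
}

def run_agent(task: str) -> str:
    """Execute a multi-step plan, recording each tool call in a transcript."""
    transcript = []
    for tool_name, argument in stub_planner(task):
        result = TOOLS[tool_name](argument)
        transcript.append(f"{tool_name}({argument!r}) -> {result}")
    return "\n".join(transcript)

print(run_agent("Find recent UK guidance on AI assurance"))
```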
4. Introduction

1. The UK's AI sector is thriving. The AI industry in the UK employs over 50,000 people and contributes £3.7 billion to the economy[footnote 11]. Our universities produce some of the best AI research and talent, and the UK is home to the third largest number of AI unicorns and start-ups in the world[footnote 12].

2. Our goal is to make the UK a great place to build and use AI that changes our lives for the better. AI is the defining technology of our time and the UK is leading the world with our response.

3. In March 2023, we published a white paper setting out our proposals to establish a regulatory framework for AI to drive safe, responsible innovation[footnote 13]. We set five principles for regulators to interpret and apply within their domains. We also included proposals for a central function within government to conduct a range of activities, such as risk assessment and regulatory coordination, to support the adaptability and coherence of our approach.

4. We held a 12-week public consultation on our proposals[footnote 14]. We have now analysed the evidence (see Annex A for details), which has informed our approach. We thank everyone for their submissions. We have also built into our response the key achievements from the AI Safety Summit in November 2023, as well as themes from our engagement ahead of the Summit.

AI White Paper consultation and AI Summit activities

[Figure: AI White Paper consultation and AI Summit activities.]

5. The pace of AI development continues to accelerate. In the run up to the AI Safety Summit, we published a discussion paper on AI risks and capabilities showing that these trends are likely to continue as companies build these technologies using more compute, more data, and increasingly efficient algorithms[footnote 15]. Some frontier AI labs have stated their goal to build AI systems that are more capable than humans at a range of tasks[footnote 16].
6. Enhanced capabilities bring new opportunities. AI is already changing the way that we live and work. Workers using AI in sectors ranging from manufacturing to finance have reported improvements to their job enjoyment, performance, and health[footnote 17]. AI will change the tasks we do at work and the skills we need to do them well[footnote 18]. Recent AI developments are also changing how we spend our leisure time, with powerful AI systems underpinning the chatbots and image generators that have become some of the fastest growing consumer applications in history[footnote 19]. Highly capable AI is already transforming sectors, from helping us to conserve our wildlife[footnote 20] to changing the ways that we identify and treat disease[footnote 21].

7. However, more powerful AI also poses new and amplified risks. For example, AI chatbots may make false information more prominent[footnote 22], or a highly capable AI system may be misused to enable crime. For instance, a model designed for drug discovery could potentially be accessed maliciously to create harmful compounds[footnote 23].

8. AI may also fundamentally transform life in ways that are hard to predict. For instance, future agentic AI systems may be able to pursue complex goals with limited human supervision, raising questions around how AI agents remain attributable, ask for approval before taking action, and can be interrupted.

9. AI technologies present significant uncertainties that require an agile regulatory approach, one that supports innovation whilst adapting to address new risks. In this consultation response, we show how our flexible approach is already addressing key AI-related risks and how we are further strengthening this framework (section 5.1). We also set out initial thinking on potential new responsibilities for the developers of highly capable general-purpose AI systems, alongside the voluntary commitments secured at the AI Safety Summit (section 5.2). In section 6, we provide a summary of the evidence we received to our consultation, along with our formal response.
5. A regulatory framework to keep pace with a rapidly advancing technology

10. In the AI regulation white paper, we proposed five cross-sectoral principles for existing regulators to interpret and apply within their remits in order to drive safe, responsible AI innovation[footnote 24]. These are:

- Safety, security and robustness.
- Appropriate transparency and explainability.
- Fairness.
- Accountability and governance.
- Contestability and redress.

11. We welcome the strong support for these principles through the consultation. They are the foundation of our approach. We remain committed to a context-based approach that avoids unnecessary blanket rules applying to all AI technologies, regardless of how they are used. This is the best way to ensure an agile approach that stands the test of time.

12. We are pleased to see how regulators are already independently implementing our principles. In the white paper we highlighted the importance of a central function to support regulator capabilities and coordination. We have made good progress establishing this function within government, and in section 5.1 we set out how we are further strengthening it, including through new funding. We also show how regulators and the government are addressing some of the most important issues facing us today.

13. In section 5.2, we set out some of the regulatory challenges posed by the rapid development of highly capable general-purpose systems; how we are currently tackling these through voluntary measures, including those agreed at the AI Safety Summit; and which additional responsibilities may be required in the future to address risks effectively.

5.1. Delivering a proportionate, context-based approach to regulate the use of AI

5.1.1. Regulators are taking active steps in line with the framework
14. Since the publication of the AI regulation white paper, a number of regulators have set out work in line with our principles-based approach. For example, the Competition and Markets Authority (CMA) published a review of foundation models to understand the opportunities and risks for competition and consumer protection[footnote 25]. The Information Commissioner's Office (ICO) updated its guidance on how data protection laws apply to AI systems to include fairness[footnote 26]. To ensure the safety of AI, regulators such as the Office of Gas and Electricity Markets (Ofgem) and the Civil Aviation Authority (CAA) are working on AI strategies to be published later this year. This builds on regulator work that led the way in clarifying how existing frameworks apply to AI risks in their domains, such as the Medicines and Healthcare products Regulatory Agency (MHRA) Software and AI as a Medical Device Change Programme 2021 on requirements for software and AI used in medical devices[footnote 27].

15. It is important that the public have full visibility of how regulators are incorporating the principles into their work. The government has written to a number of regulators impacted by AI to ask them to publish an update outlining their strategic approach to AI by 30 April[footnote 28].
We are encouraging regulators to include:

- An outline of the steps they are taking in line with the expectations set out in the white paper.
- Analysis of AI-related risks in the sectors and activities they regulate and the actions they are taking to address these.
- An explanation of their current capability to address AI as compared with their assessment of requirements, and the actions they are taking to ensure they have the right structures and skills in place.
- A forward look of plans and activities over the coming 12 months.

16. When we published the AI regulation white paper, we proposed that the principles would be established on a non-statutory basis. Many consultation respondents noted the potential benefits of a statutory duty on regulators, but some acknowledged that implementing the regime on a non-statutory basis in the first instance would allow for important flexibilities. We think a non-statutory approach currently offers critical adaptability, especially while we are still establishing our approach, but we will keep this under review. Our decision will be informed in part by our review of the plans published by regulators, as set out above; our review of regulator powers, as set out below; and our wider approach to AI legislation, such as the introduction of targeted binding measures (see section 5.2).

5.1.2. Supporting regulatory capability and coordination

17. The systemic changes driven by AI demand a system-wide response: our individual regulators cannot successfully address the opportunities and risks presented by AI technologies within their remits by acting in isolation. In the AI regulation white paper, we proposed a new central function, established within government, to monitor and assess risks across the whole economy and support regulator coordination and clarity.

18. The proposal for a central function was widely welcomed by stakeholders, who noted it is critical to the effective delivery of the AI regulation framework. Many stressed that, without such a function, there is a risk of regulatory overlaps, gaps, and poor coordination as multiple regulators consider the impact of AI in their domains.

19. We have already started to establish this function in a range of ways:
i. Risk assessment: We have recruited a new multidisciplinary team to undertake cross-sectoral risk monitoring within the Department for Science, Innovation and Technology (DSIT), bringing together expertise in risk, regulation, and AI with backgrounds in data science, engineering, economics, and law. This team will provide continuous examination of cross-cutting AI risks, including evaluating the effectiveness of interventions by both the government and regulators. In 2024, we will launch a targeted consultation on a cross-economy AI risk register to ensure it comprehensively captures the range of risks. It will provide a single source of truth on AI risks which regulators, government departments, and external groups can use (an illustrative sketch of a register entry follows this list). It will also support government work to identify any risks that fall across or in between the remits of regulators, so we can identify where there are gaps or existing regulation is ineffective and prioritise further action. In addition to the risk register, we are considering the added value of developing a risk management framework, similar to the one developed in the US by the National Institute of Standards and Technology (NIST).

ii. Regulator capabilities: Effective regulation relies on regulators having the right skills, tools, and expertise. While some regulators have been able to put the right expertise in place to address AI, others are less prepared. We are announcing £10 million for regulators to develop the capabilities and tools they need to adapt and respond to AI: we are investing in regulators today to future-proof their capabilities for tomorrow. The funding will enable regulators to collaborate to create, adapt, and improve practical tools to address AI risks and opportunities within and across their remits. It will enable regulators to carry out research and development to produce novel, actionable insights that will set the foundation of their approaches for years to come. We will work closely with regulators in the coming months to identify the most promising opportunities to leverage this funding. This builds on the recent announcement that the government will explore how to further support regulators to develop the specialist skills necessary to regulate emerging technologies, including options for increased flexibility on pay and conditions[footnote 29].
iii. Regulator powers: We recognise the need to assess the existing powers and remits of the UK's regulators to ensure they are equipped to address AI risks and opportunities in their domains and implement the principles in a consistent and comprehensive way. We will, therefore, work with government departments and regulators to analyse and review potential gaps in existing regulatory powers and remits.

iv. Coordination: In the coming months we will formalise our regulator coordination activities. To support and guide this work, we will establish a steering committee with government representatives and key regulators to support knowledge exchange and coordination on AI governance by spring 2024. We continue to support regulatory coordination more widely, including working with bodies such as the Digital Regulation Cooperation Forum (DRCF). Today we have published new guidance for regulators to support them to interpret and apply our principles.

v. Research and innovation: We are working closely with UK Research and Innovation (UKRI) to ensure the government's wider investments in AI R&D can support the government's safety agenda. This includes a new commitment by UKRI to improve links between regulators and the skills, expertise, and activities supported by UKRI investments in AI research, such as Responsible AI UK, the Trustworthy Autonomous Systems hub, the UKRI AI Centres for Doctoral Training, and the Alan Turing Institute. This will ensure the UK's strength in AI research is fully utilised in our regulatory framework. This work builds on our previous commitment of £250 million through the UKRI Technology Missions Fund to secure the UK's global leadership in critical technologies[footnote 30]. UKRI is today announcing that £19 million of the Technology Missions Fund will support Phase 2 of the Accelerating Trustworthy AI competition, supporting 21 projects delivered through the Innovate UK BridgeAI programme, to accelerate the adoption of trusted and responsible AI and machine learning.
vi. Ease of compliance: Regulation must work for innovators. We are supporting innovators and businesses to get new products to market safely and efficiently by funding a pilot multi-agency advisory service delivered by the DRCF[footnote 31]. This will particularly help innovators navigate the legal and regulatory requirements they need to meet before launch. The online portal for the pilot DRCF AI and Digital Hub and the application window are due to launch in the spring. Insights from the pilot will inform the implementation of our regulatory approach. Further details on the eligibility criteria for the support to be offered by the pilot have been published by the DRCF today alongside this consultation response.

vii. Public trust: We want businesses, consumers, and the public to have confidence in AI technologies. We will build trust by continuing to support work on assurance techniques and technical standards. The UK AI Standards Hub, launched in 2022, provides practical tools and guides for businesses, organisations, and individuals to effectively use digital technical standards and participate in their development[footnote 32]. In 2023, the government collaborated with techUK to launch the Portfolio of AI Assurance Techniques announced in the AI regulation white paper[footnote 33]. In spring 2024, we will publish an “Introduction to AI assurance” to further promote the value of AI assurance and help businesses and organisations build their understanding of the techniques for safe and trustworthy systems. Alongside this, we undertake regular research with the public to ensure the government's approach to AI is aligned with our wider values[footnote 34].

viii. Monitoring and evaluation: We are developing a monitoring and evaluation plan that allows us to continuously assess the effectiveness of our regime as AI technologies change. We will conduct a targeted consultation with a range of stakeholders on our proposed plan to assess the regulatory framework in spring 2024. As part of this, we will seek detailed views on our proposed metrics and data sources.
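As a purely illustrative sketch of what one entry in a cross-economy AI risk register might look like, the schema below captures a risk, its category, affected sectors, responsible regulators, and a review date. Every field name is an assumption for illustration, not a published government schema.

```python
# Hypothetical schema for one entry in a cross-economy AI risk register.
# Field names and values are illustrative assumptions, not an official format.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class RiskRegisterEntry:
    risk_id: str
    description: str
    category: str                      # e.g. societal harm, misuse, autonomy
    affected_sectors: list[str] = field(default_factory=list)
    responsible_regulators: list[str] = field(default_factory=list)
    likelihood: str = "unassessed"     # e.g. low / medium / high
    impact: str = "unassessed"
    next_review: date | None = None

example = RiskRegisterEntry(
    risk_id="AI-0001",
    description="Discriminatory outcomes from automated recruitment tools",
    category="societal harm",
    affected_sectors=["recruitment", "HR"],
    responsible_regulators=["EHRC", "ICO"],
    likelihood="medium",
    impact="high",
    next_review=date(2024, 9, 1),
)
print(example)
```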
20. AI regulation will only work within a wider ecosystem that champions the industry. In 2023, the government committed over £1.5 billion to build public sector supercomputers, including the AI Research Resource and an exascale computer. We are also working closely with the private sector to support investment, such as Microsoft's announcement of £2.5 billion for AI-related data centres in November 2023. The £80 million investment in AI hubs that we are announcing today will enable AI to evolve and tackle complex problems across applications, from healthcare treatments to power-efficient electronics. The government is also conducting a wider review of the UK AI supply chain to ensure we maintain our strategic advantage as a world leader in these technologies.

21. Finally, to drive coordinated action across government, we have established lead AI Ministers across all departments to bring together work on the risks and opportunities driven by AI in their sectors and to oversee implementation of frameworks and guidelines for public sector usage of AI. We are also establishing a new Inter-Ministerial Group to drive effective coordination across government on AI issues. Further to this, we are strengthening the team working on AI within DSIT. In February 2023, we had a team of around 20 people working on AI issues. This had grown to over 160 across the newly established AI Policy Directorate and the AI Safety Institute by the end of 2023, with plans to expand to more than 270 people in 2024. In recognition of the fact that AI is a top priority for the Secretary of State and has become central to the wider work of the department and government, we will no longer maintain the branding of a separate Office for AI. Similarly, the Centre for Data Ethics and Innovation (CDEI) is changing its name to the Responsible Technology Adoption Unit to more accurately reflect its mission. The name highlights the directorate's role in developing tools and techniques that enable responsible adoption of AI in the private and public sectors, in support of the department's central mission.

AI governance landscape

[Image 1: A diagram of the AI regulation landscape showing the relationships between the government, regulators, industry, and the wider ecosystem. DSIT is the government department with overall responsibility for AI policy, including regulation.]

5.1.3. Tackling specific risks

22. There are three broad categories of AI risk: societal harms; misuse risks; and autonomy risks[footnote 35]. Below we outline examples of how the government and regulators are responding to specific risks in line with our principles. This summary illustrates the wide range of work already happening to ensure the benefits of AI innovation can be realised safely and responsibly. It is not intended to be exhaustive or to prioritise certain risks over others.
23. In addition to the work to address specific risks outlined below, we are today announcing £2 million of Arts and Humanities Research Council (AHRC) funding to support translational research that will help to define responsible AI across sectors such as education, policing, and the creative industries. These projects, part of the AHRC's Bridging Responsible AI Divides (BRAID) work[footnote 36], will produce recommendations to inform future work in this area and demonstrate how the UK is at the forefront of embedding AI across key sectors. In addition to the scoping projects, AHRC is confirming a further £7.6 million to fund a second phase of the BRAID programme, extending activities to 2027/28. The next phase will include a new cohort of large-scale demonstrator projects, further rounds of BRAID Fellowships, and new professional AI skills provision, co-developed with industry and other partners.

Societal harms

Preparing UK workers for an AI enabled economy

24. AI is revolutionising the workplace. While the adoption of these technologies can bring new, higher quality jobs, it can also create and amplify a range of risks, such as workplace surveillance and discrimination in recruitment, that the government and regulators are already working to address. We want to harness the growth potential of AI, but this must not be at the expense of employment rights and protections for workers. The UK's robust system of legislation and enforcement for employment protections, including specialist labour tribunals, sets a strong foundation for workers. To ensure the use of AI in HR and recruitment is safe, responsible, and fair, DSIT will provide updated guidance in spring 2024.

25. Since 2018 we have funded a £290 million package of AI skills and talent initiatives to make sure that AI education and awareness is accessible across the UK. This includes funding 24 AI Centres for Doctoral Training, which will train over 1,500 PhD students. We are also working with Innovate UK and the Alan Turing Institute to develop guidance that sets out the core AI skills people need, from ‘AI citizens’ to ‘AI professionals’. We published draft guidance for public comment in November 2023 and we intend to publish a final version and a full skills framework in spring 2024[footnote 37].
26. It is hard to predict, at this stage, exactly how the labour market will change due to AI. Some sectors are concerned that AI will displace jobs through automation[footnote 38]. The Department for Education (DfE) has published initial work on the impact of AI on UK jobs, sectors, qualifications, and training pathways[footnote 39]. We can be confident that we will need to build new AI-related skills through national qualifications and training provision. The government has invested £3.8 billion in higher and further education in this parliament to make the skills system employer-led and responsive to future needs. Along with DfE's Apprenticeships[footnote 40] and Skills Bootcamps[footnote 41], the new Lifelong Learning Entitlement reforms[footnote 42] and the Advanced British Standard[footnote 43] will put academic and technical education in England on an equal footing and ensure our skills and education system is fit for the future.

Enabling AI innovation and protecting intellectual property

27. The AI technology and creative sectors, as well as our media, are strongest when they work together in partnership. This government is committed to supporting these sectors so that they continue to flourish and are able to compete internationally. The Department for Culture, Media and Sport (DCMS) is working closely with publishers, the music industry, and other creative businesses to understand the impact of AI on these sectors, with a view to mitigating risks and capitalising on opportunities. Significant funding highlighted in the Creative Industries Sector Vision[footnote 44] will help enable AI-based R&D and innovation in the creative industries.
28. Creative industries and media organisations have particular concerns regarding copyright protections in the era of generative AI. Creative industries and rights holders are concerned about the large-scale use of copyright protected content for training AI models and have called for assurance that their ability to retain autonomy and control over their valuable work will be protected. At the same time, AI developers have emphasised that they need to be able to easily access a wide range of high-quality datasets to develop and train cutting-edge AI systems in the UK.

29. The Intellectual Property Office (IPO) convened a working group made up of rights holders and AI developers on the interaction between copyright and AI. The working group has provided a valuable forum for stakeholders to share their views. Unfortunately, it is now clear that the working group will not be able to agree an effective voluntary code.

30. DSIT and DCMS ministers will now lead a period of engagement with the AI and rights holder sectors, seeking to ensure the workability and effectiveness of an approach that allows the AI and creative sectors to grow together in partnership. The government is committed to the growth of our world-leading creative industries and we recognise the importance of ensuring AI development supports, rather than undermines, human creativity, innovation, and the provision of trustworthy information.

31. Our approach will need to be underpinned by trust and transparency between parties, with greater transparency from AI developers in relation to data inputs and the attribution of outputs having an important role to play. Our work will therefore also include exploring mechanisms for providing greater transparency, so that rights holders can better understand whether content they produce is used as an input into AI models. The government wants to work closely with rights holders and AI developers to deliver this. Critical to all of this work will be close engagement with international counterparts who are also working to address these issues. We will soon set out further proposals on the way forward.

Protecting UK citizens from AI-related bias and discrimination
32. AI has the potential to entrench bias and discrimination[footnote 45], possibly leading to unfairly negative outcomes for different populations across a range of sectors. For example, unaccounted-for bias in an AI-enabled automated decision-making process could result in discriminatory outcomes against specific demographic characteristics in areas such as credit applications[footnote 46] or recruitment[footnote 47]. In line with our fairness principle, the department is working closely with the Equality and Human Rights Commission (EHRC) and the ICO to develop new solutions to address bias and discrimination in AI systems[footnote 48].

33. Both regulators and public sector bodies are acting to address AI-related bias and discrimination in their domains. The ICO has updated its guidance on how our strong data protection laws apply to AI systems that process personal data to include fairness, and has continued to hold organisations to account, for example through the issuing of enforcement notices[footnote 49]. The Office of the Police Chief Scientific Adviser published a Covenant for Using AI in Policing[footnote 50], which has been endorsed by the National Police Chiefs’ Council and should be given due regard by all developers and users of the technology in the sector.
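One basic measurement underpinning bias audits of the kind described above is comparing selection rates across demographic groups (a "four-fifths"-style disparity check). The sketch below, using made-up data, is a minimal illustration of that idea, not a regulator-endorsed methodology.

```python
# Toy disparity check: compare selection rates of an automated decision
# process across demographic groups. Data and threshold are illustrative.
from collections import defaultdict

def selection_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """decisions: (group, was_selected) pairs -> selection rate per group."""
    totals: dict[str, int] = defaultdict(int)
    selected: dict[str, int] = defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        selected[group] += was_selected
    return {g: selected[g] / totals[g] for g in totals}

# Made-up recruitment screening outcomes.
outcomes = [("group_a", True)] * 40 + [("group_a", False)] * 60 \
         + [("group_b", True)] * 20 + [("group_b", False)] * 80

rates = selection_rates(outcomes)
ratio = min(rates.values()) / max(rates.values())
print(rates)                            # {'group_a': 0.4, 'group_b': 0.2}
print(f"disparity ratio: {ratio:.2f}")  # 0.50, below a 0.8 rule-of-thumb
```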
Reforming data protection law to support innovation and privacy

34. Data is the foundation for modelling, training, and developing AI systems. But it is critical that relevant individual rights are respected and data protection principles complied with when personal data is processed in AI systems. The ICO has demonstrated how it can use data protection law to hold organisations to account through regulatory action and public communications where AI systems are processing personal data. The UK's data protection framework, which is being reformed through the Data Protection and Digital Information Bill (DPDI), will complement our pro-innovation, proportionate, and context-based approach to regulating AI.

35. Current rules on automated decision-making are confusing and complex, undermining confidence to develop and use innovative technologies. The DPDI Bill will expand the lawful bases on which solely automated decisions with significant effects on individuals can take place, giving a boost in confidence to organisations looking to use the technologies responsibly. It will continue to ensure that data subject rights are protected, with safeguards in place: for example, data subjects will be provided with information on such decisions, have the opportunity to make representations, and can request human intervention or contest the decision. This will support innovation and reduce burdens on people and businesses, while maintaining data protection safeguards in line with the UK's high standards of data protection.

Ensuring AI generated online content is trusted and safe

36. The government is committed to ensuring that people have access to accurate information and is supporting all efforts to promote verifiable sources to tackle the spread of false or misleading information. AI technologies increasingly provide individuals with cheap ways to generate realistic content that can falsely portray people and events. Similarly, AI may increase the volume of unintentionally false, biased, or harmful content[footnote 51]. This may drive negative public perceptions of information quality and lower overall trust in information sources[footnote 52].
37. We have published emerging practices for protecting trust in online information, including watermarking and output databases[footnote 53]. We will shortly launch a call for evidence on AI-related risks to trust in information to develop our understanding of this fast-moving and nascent area of technological development, including possible mitigations. This will be aimed at researchers, academics, and civil society organisations with relevant expertise. We will also explore research into the wider and systemic impacts on the information ecosystem and potential solutions. We also continue to engage with news publishers and broadcasters, as vital channels for trustworthy and verifiable information, on the risks of AI to journalism.
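To illustrate the "output database" idea mentioned in paragraph 37, here is a deliberately simple sketch: a generator records a fingerprint of each output, and a checker later tests whether a piece of content matches a recorded output. Real provenance systems need matching that survives edits; exact hashing here is a simplifying assumption.

```python
# Toy "output database": record hashes of AI-generated content so it can
# later be recognised. Exact-match hashing is a simplification; real systems
# need matching that is robust to edits and reformatting.
import hashlib

class OutputDatabase:
    def __init__(self) -> None:
        self._hashes: set[str] = set()

    def record(self, content: str) -> None:
        """Store a fingerprint of generated content."""
        self._hashes.add(hashlib.sha256(content.encode("utf-8")).hexdigest())

    def was_generated(self, content: str) -> bool:
        """Check whether content exactly matches a recorded output."""
        return hashlib.sha256(content.encode("utf-8")).hexdigest() in self._hashes

db = OutputDatabase()
db.record("A realistic but synthetic news paragraph.")
print(db.was_generated("A realistic but synthetic news paragraph."))  # True
print(db.was_generated("An edited version of that paragraph."))       # False
```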
The UK is already leading the way, ranked third in the Government AI Readiness Index[footnote 55]. In November 2023, we announced that we are tripling the number of technical AI engineers and developers within the Cabinet Office to create a new AI Incubator for the government. These experts will design and implement AI solutions across government departments to drive improvements in public service delivery. Such productivity improvements could, for example, save police up to 38 million hours per year, equivalent to roughly 750,000 hours every week[footnote 56].

42. We are seizing the opportunities presented by AI to deliver better public services, including health, education, and transport. For example, last year the Department of Health and Social Care (DHSC) and the NHS launched the £21 million AI Diagnostic Fund to deploy these technologies in key, high-demand areas such as chest X-rays and CT scans[footnote 57]. The Department for Education (DfE) has been examining how to maximise the benefits of AI in the education sector, including publishing a policy paper and a call for evidence on generative AI in education[footnote 58], as well as running a series of hackathons to further understand possible use cases. The findings of the hackathons will be published in spring of this year. The Department for Transport (DfT) is focused on the new Automated Vehicles Bill, designed to put the UK at the forefront of regulation of self-driving technology and in a strong position to realise an estimated £42 billion share of the global self-driving market. DfT also plans to publish its first Transport AI Strategy in 2024, to help both the department and the wider sector to grasp the opportunities and risks presented by new AI capabilities.
Alongside this, the department continues to fund innovative small and medium-sized enterprises (SMEs) through its Transport Research and Innovation Grants scheme to support the next generation of AI tools and applications, as well as trialling AI to support fraud identification in its grant-making processes.

43. The Cabinet Office (CO) is leading on establishing the necessary underpinnings to drive AI adoption across the public sector by improving digital infrastructure and access to data sets, and developing centralised standards. The government is also using the procurement power of the public sector to drive responsible and safe AI innovation. The Central Digital and Data Office (CDDO) has published guidance on the procurement and use of generative AI for the UK government[footnote 59]. Later this year, the Department for Science, Innovation and Technology (DSIT) will launch the AI Management Essentials scheme, setting a minimum good practice standard for companies selling AI products and services. We will consult on introducing this as a mandatory requirement for public sector procurement, using purchasing power to drive responsible innovation in the broader economy.

44. This builds on the Algorithmic Transparency Recording Standard (ATRS), which established a standardised way for public sector organisations to proactively publish information about how and why they are using algorithmic methods in decision-making. Following a successful pilot of the standard, and publication of an approved cross-government version last year, we will now make the ATRS a requirement for all government departments and plan to expand this across the broader public sector over time.
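The ATRS itself defines the authoritative record format; purely to illustrate the kind of structured, proactively published record such a standard enables, the hypothetical Python sketch below shows one way a transparency record might be represented. The field names and the example tool are our own inventions, not the standard’s schema.

```python
import json
from dataclasses import dataclass, asdict, field

@dataclass
class TransparencyRecord:
    # Hypothetical fields loosely inspired by the ATRS; the published
    # standard defines the authoritative schema.
    tool_name: str
    organisation: str
    description: str          # what the tool does, in plain language
    decision_role: str        # e.g. "decision support" vs "fully automated"
    human_oversight: str      # how a person can review or contest outcomes
    data_sources: list = field(default_factory=list)

record = TransparencyRecord(
    tool_name="Grant Fraud Triage (fictional)",
    organisation="Example Department",
    description="Flags grant applications for manual fraud review.",
    decision_role="Decision support only; no automated refusals.",
    human_oversight="All flags are reviewed by a caseworker.",
    data_sources=["historic grant applications", "fraud case outcomes"],
)
print(json.dumps(asdict(record), indent=2))  # publishable, machine-readable record
```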
45. To inform the secure use of AI across government, the public sector, and beyond, the National Cyber Security Centre (NCSC) has published a range of guidance products on the cyber security considerations around using and developing AI[footnote 60].

Misuse risks

Safeguarding democracy from electoral interference

46. The government is committed to strengthening the integrity of elections to ensure that our democracy remains secure, modern, transparent, and fair. AI has the potential to increase the reach of actors spreading disinformation online, target new audiences more effectively, and generate new types of content that are more difficult to detect[footnote 61]. Our Defending Democracy Taskforce is helping to reduce the threat of foreign interference in our democracy by bringing together a wide range of expertise across government, the intelligence community, and industry. In 2024, the Taskforce will be increasing its engagement with partners, collaborating with devolved governments, the police, local authorities, tech companies, and international partners.

47. We will always respond firmly to any threats to the UK’s democracy. The Elections Act 2022 introduced the new digital imprints regime, which will increase the transparency of digital political advertising (including AI-generated material) by requiring those promoting eligible digital campaigning material targeted at the UK electorate to include an imprint with their name and address. This will empower voters to know who is promoting political material online and on whose behalf. The Elections Act 2022 also revised the offence of undue influence. This will better protect voters from improper influences to vote in a particular way, or to not vote at all, and includes activities that deceive a person in relation to the administration of an election (such as the date of an electoral event or the location of a polling station).

48. The Online Safety Act 2023 will capture specific activity aimed at disrupting elections where it is a criminal offence in scope of the regulatory framework. This includes content that contains incitement to violence against electoral candidates and public figures, and the offence of undue influence. The foreign interference offence from the National Security Act 2023 has been added to the Online Safety Act as a “priority offence”, putting new responsibilities on online service providers and capturing attempts by foreign state actors to manipulate our information environment and undermine our democratic, political, and legal processes (including elections).
The Online Safety Act has also updated Ofcom’s statutory media literacy duty, requiring the regulator to heighten the public’s awareness of, and resilience to, misinformation and disinformation online.

49. We will consider the tools available to verify election-related content. This could include using watermarks to give people confidence in the content they are viewing. It is not just the government that needs to act. We will continue to work with tech companies to ensure that it is possible to report and remove fakes quickly. Building on discussions at the AI Safety Summit, we are collaborating with international and industry partners to address the shared risk of election interference.

Preventing the misuse of AI technologies

50. AI capabilities may be used maliciously, for example, to perform cyberattacks or design weapons[footnote 62]. Developments in AI can amplify existing risks by enabling less sophisticated threat actors to carry out more substantial attacks at a larger scale[footnote 63]. We are working with industry, academia, and international partners to find proportionate, practical mitigations to these risks. The 2023 refreshed Biological Security Strategy will ensure that by 2030 the UK is resilient to a spectrum of biological risks and a world leader in responsible innovation[footnote 64]. As set out in the National Vision for Engineering Biology, the government has identified screening of synthetic DNA as a responsible innovation policy priority for 2024[footnote 65]. Prioritising this will allow us to continue reaping the economic rewards of engineering biology in the UK whilst improving the safety of the supply chain.

51. Some of the risks presented by AI systems are manifesting today as these technologies are misused to increase the scale, speed, and success of criminal offences. As discussed above, AI can provide users with increasing capability to produce false or misleading content. This can include material that constitutes a criminal offence such as fraud, online child sexual abuse, and intimate image abuse. The government has already moved to address some of these issues in the Online Safety Act 2023.
Some AI technologies could be misused to commit identity-related fraud, such as producing false documentation used for immigration purposes. These capabilities present potential risks related to fraudulent access to public funds.

52. In order to address the potential criminal use of AI, we are reviewing the extent to which existing criminal law provides coverage of AI-enabled offending and harmful behaviour. AI may also present systemic risks to police capacity, institutional trust, and the evidential process. The government will make amendments to existing legal frameworks as required in order to protect law and order. AI also presents opportunities for law enforcement to become more efficient at detecting and preventing crime. As such, these technologies may help mitigate some of the risks of AI-enabled criminal offences. For example, we are investing in AI models that allow police to detect and categorise the severity of child abuse images more effectively. We are also exploring how AI might enable officers to redact large amounts of text evidence more quickly.

53. To help organisations develop and use AI securely, the NCSC published guidelines for secure AI system development in November 2023. The government is now looking to build on this and other important publications by releasing a call for views in spring 2024 to obtain further input on our next steps in securing AI models, including a potential Code of Practice for cyber security of AI, based on the NCSC’s guidelines. International collaboration in this area is vital if we are to see meaningful change to the security of AI models, and we will be exploring ways to promote international alignment, such as via international standards.

54. This builds on our work to secure personal devices and critical infrastructure. The security regime in the Product Security and Telecommunications Infrastructure (“PSTI”) Act, scheduled to come into effect in 2024, will require manufacturers of consumer connectable products, such as AI-enabled smart speakers, to comply with minimum security requirements underpinned by the secure by design principle.
This means no consumer connectable products in scope of the regime can be made available to UK customers unless the manufacturer has minimum security measures in place covering the product’s hardware and software, and, where appropriate, associated AI solutions. Beyond this, the National Protective Security Authority (NPSA) conducts research to understand how AI can, and will, enhance physical and personnel security. The NPSA advises a wide range of organisations, including critical national infrastructure companies, on how to address AI-related threats and delivers campaigns to help protect valuable AI-related intellectual property for emerging technology companies.

Autonomy risks

55. In our discussion paper on frontier AI capabilities and risks[footnote 66], we outlined potential future risks linked to the increasing autonomy of advanced AI systems. Some experts are concerned that, as AI systems become more capable across a wider range of tasks, humans will increasingly rely on AI to make important decisions. Some also believe that, in the future, agentic AI systems may have the capabilities to actively reduce human control and increase their own influence. New research on the advancing capabilities of agentic AI demonstrates that we may need to consider potential new measures to address emerging risks as the foundational AI technologies that underpin a range of applications continue to develop[footnote 67].

56. In section 5.2, we set out proposals for new future responsibilities on developers of highly capable general-purpose AI. While the likelihood of autonomy risks is debated, we believe that our proposals introduce accountability, governance, and oversight for these developers, as well as testing and benchmarking of powerful AI systems, to address these risks now and in the future. In particular, the testing conducted by the AI Safety Institute will identify systems with potentially hazardous capabilities (see sections 5.2 and 5.3 for more details on the role of the Institute).
Testing has already begun and will increase in pace over the following months. These initial steps build the UK’s technical capability to assess and respond to emerging AI risks, ensuring our resilience to future technological developments.

5.2. Examining the case for new responsibilities for developers of highly capable general-purpose AI systems

57. As noted above, we are seeing rapid progress in the performance of highly capable general-purpose AI systems. We expect this to continue as organisations develop them with more compute, more data, and more efficient algorithms. Developers do not always know which capabilities a model may exhibit before testing[footnote 68]. Some companies have publicly stated their goal to build AI systems that are more capable than humans at a range of tasks[footnote 69]. With agentic AI capabilities on the horizon, we expect further transformative changes to our societies[footnote 70].

58. The Prime Minister set out the government’s approach to managing risk at the frontier of AI development in October 2023. He stated: “My vision, and our ultimate goal, should be to work towards a more international approach to safety, where we collaborate with partners to ensure AI systems are safe before they are released[footnote 71].”

59. We set out below how the UK has led the way with a technical approach, securing voluntary agreements on AI safety with key countries and companies. The new AI Safety Institute will work with its partners to test the most powerful new AI systems pre- and post-deployment. As the Prime Minister set out, we will not “rush to regulate” and potentially implement the wrong measures that may insufficiently balance addressing risks and supporting innovation.
60. Clearly, if the exponential growth of AI capabilities continues, and if – as we think could be the case – voluntary measures are deemed incommensurate to the risk, countries will want some binding measures to keep the public safe. Some countries, such as the United States, are beginning to explore this through mandatory reporting requirements for the most powerful systems. We have seen significant interventions from leading figures in industry, science, and civil society highlighting how governments should consider responding to these developments[footnote 72], and we welcome continued close collaboration with these expert voices.

61. The UK will continue to lead the conversation on effective AI governance. In the section below, we set out some of the key questions that countries will have to grapple with when deciding how best to manage the risks of highly capable general-purpose AI systems, such as how to allocate liability across the supply chain and negotiate the open release of the most powerful systems. We will continue to discuss these questions with civil society, industry, and international partners to prepare for the future.

Box 2: What do we mean by highly capable general-purpose AI systems?

In the AI regulation white paper, we defined “foundation models” as “a type of AI model that is trained on a vast quantity of data and is adaptable for use on a wide range of tasks. Foundation models can be used as a base for building more specific AI models[footnote 73].”

For the purposes of the AI Safety Summit, the UK defined “frontier AI” as highly capable general-purpose AI models that can perform a wide variety of tasks and match or exceed the capabilities present in today’s most advanced models.

Today, this can include the cutting-edge foundation models that underpin consumer-facing applications. However, it is important to note that, both today and in the future, highly capable AI systems could be underpinned by another technology.

In this consultation response, we focus our discussion on future responsibilities for the developers of highly capable general-purpose AI systems.
Developers of these systems currently face the least clear legal responsibilities. The systems have the least coverage by existing regulation while presenting some of the greatest potential risk. This means some of those risks may not be addressed effectively. In the future, our regulatory approach might also need to allocate new responsibilities to developers of highly capable narrow systems as the framework continues to adapt to reflect new technological developments, different risks, or further analysis of accountability across the AI life cycle.

5.2.1. The regulatory challenges of highly capable general-purpose AI

62. The AI regulation white paper outlined a regulatory approach designed to adapt and keep pace with the rapid developments in AI technology. For the large majority of AI systems, our view is still that it is more effective to focus on how AI is used within a specific context than to regulate specific technologies. This is because the level of risk will be determined by where and how AI is used.

63. However, some highly capable AI systems can present substantial risks. Risk may increase when a highly capable system is general-purpose and can be used in a wide range of applications across different sectors. If a general-purpose AI system presents a risk of harm, multiple sectors or applications could be exposed at once: a single feature or flaw in one model might result in multiple harms across the whole economy. For example, if an AI system is used to underpin complex automated processes in both healthcare and recruitment, but the model’s outputs demonstrate bias in a way that is not sufficiently transparent or with impacts that are not adequately mitigated, this could result in discriminatory practices in these different services.

64. Highly capable general-purpose AI systems challenge a context-based approach to regulation, as some of the risks that they contribute to may not be effectively mitigated by existing regulation. For example, the cross-sectoral impact of these systems may prevent harms from being sufficiently addressed.
Even though some regulators can enforce existing laws against the developers of the most capable general-purpose systems within their current remits[footnote 74], the wide range of potential uses means that general-purpose systems do not currently fit neatly within the remit of any one regulator, potentially leaving risks without effective mitigations[footnote 75].

65. While some regulators demonstrate advanced approaches to addressing AI within their remits, many of our current legal frameworks and regulator remits may not effectively mitigate the risks posed by highly capable general-purpose AI systems. Many regulators in the UK can struggle to enforce existing rules on those actors designing, training, and developing the most powerful general-purpose AI systems. Similarly, it is not always clear how existing rules can be applied to effectively address the risks that highly capable general-purpose models can present. Existing rules and laws are frequently applied at the deployment or application level of AI, but the organisations deploying or using these systems may not be well placed to identify, assess, or mitigate the risks they can present. If this is the case, new responsibilities on the developers of highly capable general-purpose models may more effectively address risks.

66. Our ongoing work analysing life cycle accountability for AI, outlined in the white paper, may eventually need to consider the role of other actors across the value chain, such as data or cloud hosting providers, to determine how legal responsibility for AI may be distributed most fairly and effectively. This analysis will also consider how the unpredictable way in which future capabilities and risks may emerge could expose further gaps in the regulatory landscape.

Case study 1: Liability as a barrier to AI adoption in the UK

“Count Your Pennies Ltd”, a fictional accountancy firm, purchases an “off the shelf” AI recruitment tool developed by a fictional UK company called “Quantum Talent Technologies”. The tool automatically shortlists candidates based on their application forms.

One fictional candidate, Ms Smith, queries why her application was rejected for a certain position given her clear suitability for the role. After receiving an unsatisfactory response from the recruiting manager, she files a discrimination claim. Through the investigation, it becomes clear that the AI tool is discriminatory.
It was built using a powerful foundation model that was developed by a non-UK company and trained on biased historic employment data.

It’s common for the law to allocate liability to the last actor in the chain (in this case, “Count Your Pennies Ltd”). In limited circumstances, the law may also allocate liability to the actor immediately above in the supply chain (in this case, “Quantum Talent Technologies”)[footnote 76].

For example, it can be difficult for equality law – the statutory framework designed to legally protect people against discrimination in the workplace and in wider society[footnote 77] – to allocate liability to anyone other than the end deployer. This could ultimately lead to harmful outcomes (if the actors most able to address risks and harms are not incentivised or held accountable), undermine AI adoption, and dampen innovation across the UK economy. We will continue to analyse challenges such as these as part of our ongoing policy work on life cycle accountability for AI.

67. While highly capable narrow AI systems are in scope of the regulatory framework for AI, these systems may require a different set of interventions if they present potentially dangerous capabilities. Narrow systems are more likely than general-purpose systems to be subject to effective regulation within the remit of an existing regulator. We will continue to gather evidence on whether the specialised nature of highly capable narrow systems demands a different approach to general-purpose systems.

5.2.2. The role of voluntary measures in initially building an effective and targeted regulatory approach

68. We have already started to make the world safer today by securing commitments from leading AI companies on voluntary measures. Building on voluntary commitments brokered by the White House, the Secretary of State for Science, Innovation and Technology wrote to seven frontier AI companies prior to the AI Safety Summit requesting that they publish their safety policies. All seven companies published their policies before the AI Safety Summit, increasing transparency within the AI community and encouraging safe industry practice[footnote 78].
We also published a report on emerging processes for frontier AI safety to inform the future development of safety policies (see Box 3)[footnote 79]. In 2024, we will encourage AI companies to develop their AI safety and responsible capability scaling policies[footnote 80]. As part of this work, we will update our emerging processes guide by the end of the year.

Box 3: Emerging Processes for Frontier AI Safety

Ahead of the AI Safety Summit, the UK government outlined a set of emerging safety processes to provide information to companies on how they can ensure and maintain the safety of AI technologies.

The document covers nine emerging processes:

1. Responsible Capability Scaling - a framework for managing risk as organisations scale the capability of frontier AI systems, enabling companies to prepare for potential future, more dangerous AI risks before they occur.

2. Model Evaluations and Red Teaming - methods to assess the risks AI systems pose and inform better decisions about training, securing, and deploying them.

3. Model Reporting and Information Sharing - practices that increase government visibility of frontier AI development and deployment and enable users to make well-informed choices about whether and how to use AI systems.

4. Security Controls including Securing Model Weights - measures such as cyber security and other security controls that underpin AI system security.

5. Reporting Structure for Vulnerabilities - a process to enable outsiders to identify safety and security issues in an AI system.

6. Identifiers of AI-generated Material - tools to mitigate the creation and distribution of deceptive AI-generated content by providing information about whether content has been AI generated or modified.
7. Prioritising Research on Risks Posed by AI - research processes to identify and address the emerging risks posed by frontier AI.

8. Preventing and Monitoring Model Misuse - practices to identify and prevent intentional misuse of AI systems.

9. Data Input Controls and Audits - measures to identify and manage training data that is likely to increase the dangerous capabilities their frontier AI systems possess, and the risks they pose.

The document consolidated emerging thinking in AI safety from research institutes and academia, companies, and civil society, who the UK government collaborated and engaged with throughout its development. AI safety is an ongoing project and the processes and practices will continue to evolve through research and dialogue between governments and the broader AI ecosystem. The document provides a useful starting point for future frameworks for action both in the UK and globally.
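As a purely illustrative sketch of the kind of mechanism process 9 above (“Data Input Controls and Audits”) describes, the Python fragment below screens candidate training examples against a placeholder blocklist and keeps an audit trail of exclusions. The marker strings and function names are invented; production controls would combine classifiers, provenance checks, and human review rather than keyword matching.

```python
# Illustrative only: real data input controls are far richer than keyword rules.
DISALLOWED_MARKERS = ["synthesis route for", "zero-day exploit for"]  # placeholders

def audit_training_data(examples):
    """Split examples into kept/excluded and record why items were excluded."""
    kept, audit_log = [], []
    for i, text in enumerate(examples):
        hits = [m for m in DISALLOWED_MARKERS if m in text.lower()]
        if hits:
            audit_log.append({"index": i, "reason": f"matched {hits}"})
        else:
            kept.append(text)
    return kept, audit_log

kept, log = audit_training_data([
    "A history of public health policy.",
    "Step-by-step synthesis route for a toxic agent.",
])
print(len(kept), log)  # 1 [{'index': 1, 'reason': "matched ['synthesis route for']"}]
```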
69. Alongside these voluntary measures, at the AI Safety Summit, governments and AI companies agreed that both parties have a crucial role to play in testing the next generation of AI models, to ensure AI safety – both before and after models are deployed. In the UK, the newly established AI Safety Institute (see Box 4) leads this work. Leading AI tech companies have pledged to provide the Institute with priority access to their systems. The Institute has already begun testing, and is committed to doing so in partnership with other countries and their respective safety institutes. We will shortly provide an update on the AI Safety Institute’s approach to evaluations. Our assessment of the capabilities and risks of AI will also be underpinned by a new International Report on the Science of AI Safety[footnote 81], chaired by leading AI pioneer Yoshua Bengio (see paragraph 87).

Box 4: The AI Safety Institute (AISI)

At present, frontier AI developers are building powerful systems that outpace the ability of government and regulators to make them safe. As such, the government’s first challenge is one of knowledge: we do not fully understand what the most powerful systems are capable of and we urgently need to plug that gap. This will be the task of the new AI Safety Institute. It will advance the world’s knowledge of AI safety by carefully examining, evaluating, and testing new frontier AI systems. In addition, it will research new techniques for understanding and mitigating AI risk, and conduct fundamental research on how to keep people safe in the face of fast and unpredictable progress in AI.

The AI Safety Institute’s work will be fundamental to informing the UK’s regulatory framework. It will provide foundational insights to our governance regime and help ensure that the UK takes an evidence-based, proportionate approach to regulating the risks of AI. It will initially perform three core functions:

- Develop and conduct evaluations on advanced AI systems, aiming to characterise safety-relevant capabilities, understand the safety and security of systems, and assess their societal impacts.

- Drive foundational AI safety research. The Institute’s research will support short- and long-term AI governance. It will ensure the UK’s iterative regulatory framework for AI is informed by the latest expertise and lay the foundation for technically grounded international governance of advanced AI.
Projects will range from rapid development of tools to inform governance, to exploratory AI safety research which may be underexplored by industry.

- Facilitate information exchange, including by establishing – on a voluntary basis and subject to existing privacy and data regulation – clear information-sharing channels between the Institute and other national and international actors, such as policymakers, international partners, private companies, academia, civil society, and the broader public.

The goal of the Institute’s evaluations will not be to designate any particular AI system as “safe”; it is not clear that available techniques could justify such a definitive determination. The AI Safety Institute is not a regulator; its role is to develop the technical expertise to understand the capabilities and risks of AI systems, informing the government’s broader actions. Nevertheless, we expect progress in system evaluations to enable better-informed decision making by governments and companies and act as an early warning system for some of the most concerning risks. If the AI Safety Institute identifies a potentially dangerous capability through its evaluation of advanced AI systems, the Institute will, where appropriate, address risks by engaging the developer on suitable safety mitigations and collaborating with the government’s established AI risk management and regulatory architecture.

The Institute is focused on the most advanced current AI capabilities and any future developments. It will consider open source systems as well as those deployed with various forms of access controls.

70. These voluntary actions allow us to test and learn what works in order to adapt our regulatory approach. We will strengthen our technical understanding to build wider consensus on key interventions, such as whether there should be conditions in which it would be right to pause the development of specific systems, as some have proposed.

71. While voluntary measures help us make AI safer now, the intense competition between companies to release ever-more-capable systems means we will need to remain highly vigilant to meaningful compliance, accountability, and effective risk mitigation. It may be the case that commercial incentives are not always aligned with the public good. If the market evolves such that there are a larger number of firms building highly capable systems, the governance of voluntary approaches will be much harder[footnote 82].
It will also be increasingly important to ensure the right accountability mechanisms and corporate governance frameworks are in place for companies building the most powerful systems.

5.2.3. The case for future binding measures

72. The section above highlights how the context-based approach may miss significant risks posed by highly capable general-purpose systems and leave the developers of those systems unaccountable. Whilst voluntary measures are a useful tool to address risks today, we anticipate that all jurisdictions will, in time, want to place targeted mandatory interventions on the design, development, and deployment of such systems to ensure risks are adequately addressed.

Foundation model supply chain

Image 2: A diagram of the foundation model supply chain, taken from the Ada Lovelace Institute’s ‘Foundation Models Explainer’ (https://www.adalovelaceinstitute.org/resource/foundation-models-explainer/). Note: this is one possible model (there will not always be a separate or single company at each layer).

While there are many different ways to understand and describe the often complex life cycles of AI technologies, this diagram illustrates that our proposed future measures would be clearly targeted at the small number of companies that work in the foundation model developer layer, building highly capable general-purpose AI.

73. Predicting which systems are capable enough to lead to significant risk is not straightforward. In line with our proportionate approach, any future regulation would be targeted at the small number of developers of the most powerful general-purpose systems. We propose to do this by establishing dynamic thresholds that can quickly respond to advances in AI development. Our preliminary analysis indicates that initial thresholds could be based on forecasts of capabilities using a combination of two proxies: compute (i.e. the amount of compute used to train the model) and capability benchmarking (i.e. assessing capabilities in certain risk areas to identify where we think high capabilities result in high risk). At least for the time being, the combination of these proxies can predict AI capabilities reasonably well; however, there may need to be a range of thresholds.
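To make the proposal in paragraph 73 concrete, the sketch below shows one way a compound, dynamic threshold combining the two proxies could be expressed. Every number, benchmark name, and function in it is invented for illustration and does not reflect any actual or proposed government threshold.

```python
# Illustrative compound threshold only: the consultation response proposes
# combining a compute proxy with capability benchmarking, but all values
# and benchmark names below are hypothetical placeholders.
COMPUTE_THRESHOLD_FLOP = 1e26          # hypothetical training-compute trigger
CAPABILITY_THRESHOLDS = {              # hypothetical benchmark-score triggers
    "cyber_offense_eval": 0.70,
    "autonomy_eval": 0.60,
}

def in_scope(training_flop, benchmark_scores):
    """A model is in scope if it crosses the compute proxy OR any capability trigger."""
    if training_flop >= COMPUTE_THRESHOLD_FLOP:
        return True
    return any(
        benchmark_scores.get(name, 0.0) >= limit
        for name, limit in CAPABILITY_THRESHOLDS.items()
    )

print(in_scope(5e25, {"cyber_offense_eval": 0.75}))  # True: capability trigger
print(in_scope(2e26, {}))                            # True: compute trigger
print(in_scope(5e25, {"autonomy_eval": 0.30}))       # False: below both proxies
```

Because both the compute figure and the benchmark levels sit in plain constants, a threshold regime of this shape could be revised quickly as capabilities advance, which is the property “dynamic thresholds” is meant to capture.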
74. Any new obligations would ensure that the developers of in-scope systems adhere to the principles set out in the AI regulation white paper, including safety, security, transparency, fairness, and accountability. This could include transparency measures (for example, relating to the data that systems are trained on); risk management, accountability, and corporate governance related obligations; or actions to address potential harms, such as those caused by misuse or unfair bias before or after training.

75. The open release of AI has, overall, been beneficial for innovation, transparency, and accountability. A degree of openness in AI is, and will continue to be, critical to scientific progress, and we recognise that openness is core to our society and culture. However, while we are committed to defending the value of openness, we note that there is a balance to strike as we seek to mitigate potential risks. In this regard, we see an emerging consensus on the need to explore pre-deployment capability testing and risk assessment for the most powerful AI systems, including where systems might be released openly. Pre-deployment testing could inform the deployment options available for a model and change the risk prevention steps required of organisations prior to the model’s release. Recognising the complexity of the debate, we are working closely with the open source community and AI developers to understand their needs. Our engagement with those developing and using AI models that are highly capable, general-purpose, and open access will allow us to explore the need for nuanced and targeted policy options that minimise any negative impacts on valuable open source activity, whilst mitigating risks.

76. The challenges posed by AI will ultimately require legislative action in every country once understanding of risk has matured. Introducing binding measures too soon, even if highly targeted, could fail to effectively address risks, quickly become out of date, or stifle innovation and prevent people from across the UK from benefiting from AI. In line with the adaptable approach set out in the AI regulation white paper, the government would consider introducing binding measures if we determined that existing mitigations were no longer adequate and we had identified interventions that would mitigate risks in a targeted way. As with any decision to legislate, the government would only consider introducing legislation if we were not sufficiently confident that voluntary measures would be implemented effectively by all relevant parties and if we assessed that risks could not be effectively mitigated using existing legal powers. Finally, prior to legislating, the government would need to be confident that we could mandate measures in a way that would significantly mitigate risk without unduly dampening innovation and competition.

77. We know there is more work to do to refine our approach to regulating the most capable AI systems and the actors that design, develop, and deploy them.
We look forward to developing our proposals by working closely with industry, academia, civil society, and the wider public. In Box 5, below, we set out the key questions that will guide our policy development.

Box 5: Key questions for policy development on the future regulation of highly capable general-purpose systems

Building on the evidence we received to our AI regulation white paper consultation on the topic of life cycle accountability and foundation models, over the coming months we will work closely with a range of experts and international partners to examine the questions below. We will publish findings from this engagement in a series of expert discussion papers. We will also publish the next iteration of our thinking and the steps we are taking in relation to the most capable AI systems.

- Which specific risks should be addressed through future regulatory interventions targeted at highly capable AI systems? How do we ensure the regime is resilient to future developments?

- When should the government and regulators intervene? Which systems should we be targeting? What would a compound threshold for intervention look like? Is compute a useful proxy for now, if thresholds remain dynamic? What about capability benchmarking?

- Which obligations should be imposed on developers? Should the obligations be linked to our AI regulation principles? How do we ensure that the obligations are flexible but clear? At what stage could it be necessary to pause model development?

- What, if any, new regulatory powers are required? How would this work alongside the existing regulatory landscape?

- What should enforcement of any new regulation look like? What legal responsibilities should developers of in-scope systems have? Are updates to civil or criminal liability frameworks needed?

- How do we provide regulatory certainty to drive responsible AI innovation while retaining an adaptable regime that can accommodate fast technical developments? How do we avoid creating barriers to market entry and scale-up?

- Should certain capabilities trigger controls on open release? What would the negative consequences be? How should thresholds be set? What controls could be imposed?

- What are the roles of existing transparency and accountability frameworks?
How can strong transparency and good accountability be encouraged or assured to support responsible development of the most capable \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e systems?\u003c/p\u003e\n \u003c/li\u003e\n \u003cli\u003e\n \u003cp\u003eShould developers of highly capable \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e systems be subject to specific corporate governance requirements? Is there a role for requirements on developers of highly capable \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e systems to consider and mitigate risks to society or humanity at large?\u003c/p\u003e\n \u003c/li\u003e\n \u003cli\u003e\n \u003cp\u003eHow do potential new measures on highly capable \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e systems link to wider life cycle accountability for \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e? Are other actors in the \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e value chain also hard for regulators to reach in a way that hampers our ability to address risk and support \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e innovation and adoption?\u003c/p\u003e\n \u003c/li\u003e\n \u003c/ul\u003e\n\u003c/div\u003e\u003cp\u003e78. As we set out in the \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e regulation white paper, our intention is for our regulatory framework to apply to the whole of the UK, subject to existing exemptions and derogations for unique operating requirements, such as defence and national security. However, we recognise that \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e is used across a wide variety of sectors, some of which are reserved and some of which are devolved. As our policy develops and we consider the introduction of binding requirements on the developers of the most capable general-purpose systems, we will continue to assess any devolution impacts and the need for extraterritorial reach.\u003c/p\u003e\u003cp\u003e79. We are committed to engaging the territorial offices and devolved administrations on both the design and delivery of the regulatory framework, so that businesses and citizens across the UK benefit from our regulatory approach.\u003c/p\u003e\u003ch3 id=\"working-with-international-partners-to-promote-effective-collaboration-on-ai-governance\"\u003e\n\u003cspan class=\"number\"\u003e5.3. \u003c/span\u003e Working with international partners to promote effective collaboration on \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e governance\u003c/h3\u003e\u003cp\u003e80. \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e knows no borders and its impact will shape societies and economies in all corners of the world: \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e developed in one nation will increasingly affect the lives of citizens living in others. Effective governance of \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e will therefore require equally impactful international cooperation, which must build on the work of existing multilateral and multi-stakeholder fora and initiatives.\u003c/p\u003e\u003cp\u003e81.
The UK is an established global leader in \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e with a history of driving forward the international conversation and taking clear, decisive action to build bilateral and multilateral agreement. Our focus to date has been on collaborative action to support the development of \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e in line with the context-based framework and principles set out in the \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e regulation white paper\u003csup id=\"fnref:83\" role=\"doc-noteref\"\u003e\u003ca href=\"#fn:83\" class=\"govuk-link\" rel=\"footnote\"\u003e[footnote 83]\u003c/a\u003e\u003c/sup\u003e. This involves working alongside different groups of countries in accordance with need and acting in a targeted and proportionate manner. Our goal remains to work with others to build an international community that is able to realise the opportunities of \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e on a global scale. We promote our values and collaborate where suitable to address the most pressing current and future \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e-related risks. We carefully balance safety and innovation, acting alongside our partners to promote the international design, development, deployment, and use of the highest-potential \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e systems.\u003c/p\u003e\u003cp\u003e82. We will continue to act through bilateral partnerships and multilateral initiatives – including future \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e Safety Summits – to promote safe, secure, and trustworthy \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e, underpinned by effective international \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e governance. Throughout this we will adopt a multi-stakeholder approach: we will collaborate with our international partners by working with representatives from industry, academia, civil society, and government to ensure we can reap the extraordinary benefits afforded by these technologies\u003csup id=\"fnref:84\" role=\"doc-noteref\"\u003e\u003ca href=\"#fn:84\" class=\"govuk-link\" rel=\"footnote\"\u003e[footnote 84]\u003c/a\u003e\u003c/sup\u003e.\u003c/p\u003e\u003cp\u003e83. Working with these networks, we will unlock the opportunities presented by \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e while addressing potential risks. In support of this, we maintain close relationships with our international partners across the full range of issues detailed in section 5.1, as well as on our respective emerging domestic approaches.\u003c/p\u003e\u003cp\u003e84. Domestic and international approaches must develop in tandem. In developing our own approach to \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e regulation we will, therefore, both influence and respond to international developments. We will continue to proactively engage with the international landscape to ensure the appropriate degree of cooperation required for effective \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e governance. We will achieve appropriate levels of coherence with other regulatory regimes, promote safety, and minimise potential barriers to trade – maximising opportunities for individuals and businesses across the UK and beyond.
We will continue to work with our international partners to drive the development and adoption of tools for trustworthy \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e, such as assurance techniques and global technical standards, in order to promote interoperability and avoid fragmentation.\u003c/p\u003e\u003cp\u003e85. We will continue to recognise the critical nature of safety in underpinning, but not supplanting, all other aspects of international \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e collaboration. As Prime Minister Rishi Sunak set out, our “vision, and our ultimate goal, should be to work towards a more international approach to safety”\u003csup id=\"fnref:85\" role=\"doc-noteref\"\u003e\u003ca href=\"#fn:85\" class=\"govuk-link\" rel=\"footnote\"\u003e[footnote 85]\u003c/a\u003e\u003c/sup\u003e. As noted above, the UK hosted the first ever \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e Safety Summit in November 2023 and secured the Bletchley Declaration, a landmark agreement between 29 parties, including 28 countries from across the globe and the European Union\u003csup id=\"fnref:86\" role=\"doc-noteref\"\u003e\u003ca href=\"#fn:86\" class=\"govuk-link\" rel=\"footnote\"\u003e[footnote 86]\u003c/a\u003e\u003c/sup\u003e. The Declaration builds a shared understanding of the opportunities and risks that \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e presents and the need for collaborative action to ensure the safety of the most powerful \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e systems now and in the future. A number of countries and companies developing frontier \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e also agreed to state-led testing of the next generation of systems, including through partnerships with newly announced \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e Safety Institutes (see Box 4 for more detail)\u003csup id=\"fnref:87\" role=\"doc-noteref\"\u003e\u003ca href=\"#fn:87\" class=\"govuk-link\" rel=\"footnote\"\u003e[footnote 87]\u003c/a\u003e\u003c/sup\u003e.\u003c/p\u003e\u003cp\u003e86. The pace of \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e development shows no sign of slowing down, so the UK is committed to establishing enduring international collaboration on \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e safety, building on the foundations of the \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e Safety Summit agreements. To maintain this momentum and ensure that action is taken to secure \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e safety, the Republic of Korea has agreed to co-host the next \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e Safety Summit with the UK. France has agreed to host the following summit.\u003c/p\u003e\u003cp\u003e87. The UK’s \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e Safety Institute represents one of our key contributions to international collaboration on \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e. The Institute will partner with other countries to facilitate collaboration between governments on \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e safety testing and governance, and help them develop their own capability.
The Institute will facilitate international collaboration in three key ways:\u003c/p\u003e\u003cul\u003e\n \u003cli\u003e\n \u003cp\u003e\u003cstrong\u003ePartnerships\u003c/strong\u003e: the \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e Safety Institute has agreed a partnership with the US \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e Safety Institute and with the government of Singapore to collaborate on \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e safety testing and is in regular dialogue on \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e safety issues with international partners.\u003c/p\u003e\n \u003c/li\u003e\n \u003cli\u003e\n \u003cp\u003e\u003cstrong\u003eInternational Report on the Science of \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e Safety\u003c/strong\u003e\u003csup id=\"fnref:88\" role=\"doc-noteref\"\u003e\u003ca href=\"#fn:88\" class=\"govuk-link\" rel=\"footnote\"\u003e[footnote 88]\u003c/a\u003e\u003c/sup\u003e: The report was first unveiled as the State of the Science Report at the UK \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e Safety Summit in November 2023, where represented countries agreed to the development of an internationally authored report on the capabilities and risks of advanced \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e. Rather than producing new material, it will summarise the best of existing research and identify priority research areas, providing a synthesis of the existing knowledge of risks from advanced \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e.\u003c/p\u003e\n \u003c/li\u003e\n \u003cli\u003e\n \u003cp\u003e\u003cstrong\u003eInformation Exchange\u003c/strong\u003e: the \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e Safety Institute’s evaluations and research are the first step in addressing the insight gaps between industry, governments, academia, and the public. This will ensure relevant parties, including international partners, receive the information they need to inform the development of shared protocols.\u003c/p\u003e\n \u003c/li\u003e\n\u003c/ul\u003e\u003cp\u003e88. The UK also plays a proactive role through a range of multilateral initiatives to drive forward our ambition to promote the safe and responsible design, development, deployment, and use of \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e. This includes:\u003c/p\u003e\u003cul\u003e\n \u003cli\u003e\n \u003cp\u003e\u003cstrong\u003e\u003cabbr title=\"Group of Seven\"\u003eG7\u003c/abbr\u003e\u003c/strong\u003e: Working in cooperation with our partners in this forum, the UK has made significant progress in responding quickly to new technological developments and driving work on effective international \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e governance.
In December 2023, under Japan’s Presidency, \u003cabbr title=\"Group of Seven\"\u003eG7\u003c/abbr\u003e Leaders welcomed the Hiroshima \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e Process Comprehensive Policy Framework that includes international guiding principles for all \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e actors and a Code of Conduct for organisations developing advanced \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e systems, as well as a work plan to further advance these outcomes\u003csup id=\"fnref:89\" role=\"doc-noteref\"\u003e\u003ca href=\"#fn:89\" class=\"govuk-link\" rel=\"footnote\"\u003e[footnote 89]\u003c/a\u003e\u003c/sup\u003e. We encourage \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e actors, and especially \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e developers, to further engage with and support these outcomes. We look forward to collaborating further on \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e under Italy’s \u003cabbr title=\"Group of Seven\"\u003eG7\u003c/abbr\u003e Presidency in 2024.\u003c/p\u003e\n \u003c/li\u003e\n \u003cli\u003e\n \u003cp\u003e\u003cstrong\u003e\u003cabbr title=\"Group of 20\"\u003eG20\u003c/abbr\u003e\u003c/strong\u003e: In September 2023, as part of India’s \u003cabbr title=\"Group of 20\"\u003eG20\u003c/abbr\u003e Presidency, the UK Prime Minister agreed to and endorsed the New Delhi Leaders’ Declaration alongside all other \u003cabbr title=\"Group of 20\"\u003eG20\u003c/abbr\u003e Members\u003csup id=\"fnref:90\" role=\"doc-noteref\"\u003e\u003ca href=\"#fn:90\" class=\"govuk-link\" rel=\"footnote\"\u003e[footnote 90]\u003c/a\u003e\u003c/sup\u003e. The Declaration reaffirmed the UK’s commitment to the 2019 \u003cabbr title=\"Group of 20\"\u003eG20\u003c/abbr\u003e \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e Principles and emphasised the importance of a governance approach that balances the benefits and risks of \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e and promotes responsible \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e for achieving the \u003cabbr title=\"United Nations\"\u003eUN\u003c/abbr\u003e Sustainable Development Goals\u003csup id=\"fnref:91\" role=\"doc-noteref\"\u003e\u003ca href=\"#fn:91\" class=\"govuk-link\" rel=\"footnote\"\u003e[footnote 91]\u003c/a\u003e\u003c/sup\u003e. The UK will work closely with Brazil on their \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e ambitions as part of their 2024 \u003cabbr title=\"Group of 20\"\u003eG20\u003c/abbr\u003e Presidency, which will centre on \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e for inclusive sustainable development.\u003c/p\u003e\n \u003c/li\u003e\n \u003cli\u003e\n \u003cp\u003e\u003cstrong\u003eGlobal Partnership on \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e (\u003cabbr title=\"Global Partnership on AI\"\u003eGPAI\u003c/abbr\u003e)\u003c/strong\u003e: The UK continues to actively shape \u003cabbr title=\"Global Partnership on AI\"\u003eGPAI\u003c/abbr\u003e’s multi-stakeholder project-based activities to guide the responsible development and use of \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e grounded in human rights, inclusion, diversity, innovation, and economic growth.
The UK was pleased to attend the December 2023 \u003cabbr title=\"Global Partnership on AI\"\u003eGPAI\u003c/abbr\u003e Summit in New Delhi, represented by the Minister for \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e, Viscount Camrose, and to both endorse the \u003cabbr title=\"Global Partnership on AI\"\u003eGPAI\u003c/abbr\u003e New Delhi Ministerial Declaration\u003csup id=\"fnref:92\" role=\"doc-noteref\"\u003e\u003ca href=\"#fn:92\" class=\"govuk-link\" rel=\"footnote\"\u003e[footnote 92]\u003c/a\u003e\u003c/sup\u003e and host a side-event on outcomes and next steps following the \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e Safety Summit. The UK has also begun a two-year mandate as a Steering Committee member and will work with India’s Chairmanship to ensure \u003cabbr title=\"Global Partnership on AI\"\u003eGPAI\u003c/abbr\u003e reaches its full potential.\u003c/p\u003e\n \u003c/li\u003e\n \u003cli\u003e\n \u003cp\u003e\u003cstrong\u003eCouncil of Europe\u003c/strong\u003e: The UK is continuing to work closely with like-minded nations on the proposed Council of Europe Convention on \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e to help protect human rights, democracy, and the rule of law. The Convention offers an opportunity to ensure these important values are codified internationally as one part of a wider approach to effective international governance.\u003c/p\u003e\n \u003c/li\u003e\n \u003cli\u003e\n \u003cp\u003e\u003cstrong\u003eOrganisation for Economic Co-operation and Development (\u003cabbr title=\"Organisation for Economic Co-operation and Development\"\u003eOECD\u003c/abbr\u003e)\u003c/strong\u003e: The UK is an active member of the Working Party on \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e Governance (\u003cabbr title=\"artificial intelligence Governance\"\u003eAIGO\u003c/abbr\u003e) and recognises the forum’s role in supporting the implementation of the \u003cabbr title=\"Organisation for Economic Co-operation and Development\"\u003eOECD\u003c/abbr\u003e \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e Principles and enabling the exchange of experience and best practice across member countries.
In 2024, the UK will support the revision of the \u003cabbr title=\"Organisation for Economic Co-operation and Development\"\u003eOECD\u003c/abbr\u003e \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e Principles\u003csup id=\"fnref:93\" role=\"doc-noteref\"\u003e\u003ca href=\"#fn:93\" class=\"govuk-link\" rel=\"footnote\"\u003e[footnote 93]\u003c/a\u003e\u003c/sup\u003e and continue to provide case studies from the UK’s Portfolio of \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e Assurance Techniques\u003csup id=\"fnref:94\" role=\"doc-noteref\"\u003e\u003ca href=\"#fn:94\" class=\"govuk-link\" rel=\"footnote\"\u003e[footnote 94]\u003c/a\u003e\u003c/sup\u003e to the \u003cabbr title=\"Organisation for Economic Co-operation and Development\"\u003eOECD\u003c/abbr\u003e’s Catalogue of Tools and Metrics for Trustworthy \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e\u003csup id=\"fnref:95\" role=\"doc-noteref\"\u003e\u003ca href=\"#fn:95\" class=\"govuk-link\" rel=\"footnote\"\u003e[footnote 95]\u003c/a\u003e\u003c/sup\u003e.\u003c/p\u003e\n \u003c/li\u003e\n \u003cli\u003e\n \u003cp\u003e\u003cstrong\u003eUnited Nations (\u003cabbr title=\"United Nations\"\u003eUN\u003c/abbr\u003e) and its associated agencies\u003c/strong\u003e: Given the organisation’s unique role in convening a wide range of nations, the UK recognises the value of the \u003cabbr title=\"United Nations\"\u003eUN\u003c/abbr\u003e-led discussions on \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e and engages regularly to shape global norms on \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e. In July 2023, the UK initiated and chaired the first \u003cabbr title=\"United Nations\"\u003eUN\u003c/abbr\u003e Security Council briefing session on \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e, and the Deputy Prime Minister chaired a session on frontier \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e risks at \u003cabbr title=\"United Nations\"\u003eUN\u003c/abbr\u003e High Level Week in September 2023. The UK continues to collaborate with a range of partners across \u003cabbr title=\"United Nations\"\u003eUN\u003c/abbr\u003e \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e initiatives, including negotiations for the Global Digital Compact, which aims to facilitate the Sustainable Development Goals through technologies such as \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e, monitoring the implementation of the \u003cabbr title=\"United Nations Educational, Scientific and Cultural Organization\"\u003eUNESCO\u003c/abbr\u003e Recommendation on the Ethics of \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e\u003csup id=\"fnref:96\" role=\"doc-noteref\"\u003e\u003ca href=\"#fn:96\" class=\"govuk-link\" rel=\"footnote\"\u003e[footnote 96]\u003c/a\u003e\u003c/sup\u003e, and engaging constructively at the International Telecommunication Union, which hosted the ‘\u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e for Good’ Summit in July 2023.
The UK will also continue to work closely with the \u003cabbr title=\"United Nations\"\u003eUN\u003c/abbr\u003e \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e Advisory Body and is reviewing its interim report: Governing \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e for Humanity\u003csup id=\"fnref:97\" role=\"doc-noteref\"\u003e\u003ca href=\"#fn:97\" class=\"govuk-link\" rel=\"footnote\"\u003e[footnote 97]\u003c/a\u003e\u003c/sup\u003e.\u003c/p\u003e\n \u003c/li\u003e\n \u003cli\u003e\n \u003cp\u003e\u003cstrong\u003eGlobal Standards Development Organisations (\u003cabbr title=\"Standards Development Organisations\"\u003eSDOs\u003c/abbr\u003e)\u003c/strong\u003e: The UK is engaging directly with \u003cabbr title=\"Standards Development Organisations\"\u003eSDOs\u003c/abbr\u003e, such as the \u003cabbr title=\"International Organization for Standardization\"\u003eISO\u003c/abbr\u003e and \u003cabbr title=\"International Electrotechnical Commission\"\u003eIEC\u003c/abbr\u003e, and is supporting developments in technical \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e standards. The UK champions a global digital standards ecosystem that is open, transparent, and consensus-based. The UK also aims to support innovation and strengthen a multi-stakeholder, industry-led model for the development of technical \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e standards, including through initiatives such as the UK’s \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e Standards Hub\u003csup id=\"fnref:98\" role=\"doc-noteref\"\u003e\u003ca href=\"#fn:98\" class=\"govuk-link\" rel=\"footnote\"\u003e[footnote 98]\u003c/a\u003e\u003c/sup\u003e. We support UK stakeholders to participate in \u003cabbr title=\"Standards Development Organisations\"\u003eSDOs\u003c/abbr\u003e to both leverage the benefits of global technical standards here in the UK and deliver global digital technical standards shaped by democratic values.\u003c/p\u003e\n \u003c/li\u003e\n\u003c/ul\u003e\u003cp\u003e89. Additionally, the UK is committed to ensuring that the benefits of \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e are widely accessible. This includes working with international partners to fund safe and responsible \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e projects for development around the world. As announced at the \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e Safety Summit, the UK is contributing £38 million through its new \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e for Development programme to support safe, responsible, and inclusive \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e innovation to accelerate progress on development challenges, focused initially in Africa\u003csup id=\"fnref:99\" role=\"doc-noteref\"\u003e\u003ca href=\"#fn:99\" class=\"govuk-link\" rel=\"footnote\"\u003e[footnote 99]\u003c/a\u003e\u003c/sup\u003e. This is part of an £80 million boost in \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e programming to combat inequality and drive prosperity in Africa, with the UK working alongside Canada, the Bill and Melinda Gates Foundation, the USA, Google, Microsoft, and African partners, including Kenya, Nigeria, and Rwanda.\u003c/p\u003e\u003cp\u003e90.
\u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e is now also fundamental to our bilateral relationships and, in some cases, it is appropriate to build deeper and more committed bilateral partnerships alongside multilateral engagement to further our shared interests. We have therefore pursued bilateral agreements with key international partners on areas including the responsible development and deployment of \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e, to build the foundation for further collaboration on \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e governance. For example, as part of the \u003cabbr title=\"Department for Science, Innovation and Technology\"\u003eDSIT\u003c/abbr\u003e International Science Partnerships Fund\u003csup id=\"fnref:100\" role=\"doc-noteref\"\u003e\u003ca href=\"#fn:100\" class=\"govuk-link\" rel=\"footnote\"\u003e[footnote 100]\u003c/a\u003e\u003c/sup\u003e, \u003cabbr title=\"UK Research and Innovation\"\u003eUKRI\u003c/abbr\u003e will invest £9 million to bring together researchers and innovators in bilateral research partnerships with the US. These partnerships will focus on developing safer, responsible, and trustworthy \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e as well as \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e for scientific uses. Since the publication of the \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e regulation white paper in March 2023, we have signed:\u003c/p\u003e\u003cul\u003e\n \u003cli\u003e\n \u003cp\u003e\u003cstrong\u003eThe Atlantic Declaration with the US\u003c/strong\u003e\u003csup id=\"fnref:101\" role=\"doc-noteref\"\u003e\u003ca href=\"#fn:101\" class=\"govuk-link\" rel=\"footnote\"\u003e[footnote 101]\u003c/a\u003e\u003c/sup\u003e: which develops our strong partnership on \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e, underpinned by our shared democratic values and our ambition to promote safe and responsible \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e innovation across the world.
Work under the 2023 Atlantic Declaration will ensure that our unique alliance is reinforced for the challenges of new technological developments.\u003c/p\u003e\n \u003c/li\u003e\n \u003cli\u003e\n \u003cp\u003e\u003cstrong\u003eThe Hiroshima Accord with Japan\u003c/strong\u003e\u003csup id=\"fnref:102\" role=\"doc-noteref\"\u003e\u003ca href=\"#fn:102\" class=\"govuk-link\" rel=\"footnote\"\u003e[footnote 102]\u003c/a\u003e\u003c/sup\u003e: which commits to focus on promoting human-centric and trustworthy \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e and interoperability between our \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e governance frameworks.\u003c/p\u003e\n \u003c/li\u003e\n \u003cli\u003e\n \u003cp\u003e\u003cstrong\u003eThe Downing Street Accord with the Republic of Korea\u003c/strong\u003e\u003csup id=\"fnref:103\" role=\"doc-noteref\"\u003e\u003ca href=\"#fn:103\" class=\"govuk-link\" rel=\"footnote\"\u003e[footnote 103]\u003c/a\u003e\u003c/sup\u003e: which builds on the progress achieved on safe, responsible \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e development, including at the \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e Safety Summit – the next edition of which will be co-hosted by the Republic of Korea and the UK.\u003c/p\u003e\n \u003c/li\u003e\n \u003cli\u003e\n \u003cp\u003e\u003cstrong\u003eThe Joint Declaration on a Strategic Partnership with Singapore\u003c/strong\u003e\u003csup id=\"fnref:104\" role=\"doc-noteref\"\u003e\u003ca href=\"#fn:104\" class=\"govuk-link\" rel=\"footnote\"\u003e[footnote 104]\u003c/a\u003e\u003c/sup\u003e: which harnesses expertise in new technologies such as \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e from the UK and Singapore. \u003cabbr title=\"Department for Science, Innovation and Technology\"\u003eDSIT\u003c/abbr\u003e also signed a Memorandum of Understanding (\u003cabbr title=\"Memorandum of Understanding\"\u003eMoU\u003c/abbr\u003e) on Emerging Technologies in June 2023 with Singapore’s Infocomm Media Development Authority (\u003cabbr title=\"Singapore’s Infocomm Media Development Authority\"\u003eIMDA\u003c/abbr\u003e). In this \u003cabbr title=\"Memorandum of Understanding\"\u003eMoU\u003c/abbr\u003e, both parties agreed to collaborate on \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e governance and to facilitate the development of effective and interoperable \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e assurance mechanisms.\u003c/p\u003e\n \u003c/li\u003e\n\u003c/ul\u003e\u003cp\u003e91. We have a number of other important bilateral relationships on \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e with countries across the world and we intend, where suitable, to build further agreements to strengthen these partnerships, for example through bilateral MoUs and Free Trade Agreements.\u003c/p\u003e\u003cp\u003e92. Only through effective global collaboration will the UK and our partners worldwide unlock the opportunities and mitigate the associated risks of \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e.
We will continue to engage our international partners to support responsible \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e innovation that effectively and proportionately addresses potential \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e harms and aligns with the principles established in the \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e regulation white paper. We will also work together to promote coherence between our \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e governance frameworks to ensure that businesses can operate effectively in both the UK and wider global markets and to ensure that \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e developments benefit people around the world.\u003c/p\u003e\u003ch3 id=\"an-ai-regulation-roadmap-of-our-next-steps\"\u003e\n\u003cspan class=\"number\"\u003e5.4. \u003c/span\u003e An \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e regulation roadmap of our next steps\u003c/h3\u003e\u003cp\u003e93. In 2024, we will:\u003c/p\u003e\u003cul\u003e\n \u003cli\u003e\n \u003cp\u003eContinue to develop our domestic policy position on \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e regulation by:\u003c/p\u003e\n\n \u003cul\u003e\n \u003cli\u003e\n \u003cp\u003eEngaging with a range of experts on interventions for highly capable \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e systems, including questions on open release, in the summer.\u003c/p\u003e\n \u003c/li\u003e\n \u003cli\u003e\n \u003cp\u003ePublishing an update on our work on new responsibilities for developers of highly capable general-purpose \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e systems by the end of the year.\u003c/p\u003e\n \u003c/li\u003e\n \u003cli\u003e\n \u003cp\u003eCollaborating across government and with regulators to analyse and review potential gaps in existing regulatory powers and remits on an ongoing basis.\u003c/p\u003e\n \u003c/li\u003e\n \u003cli\u003e\n \u003cp\u003eWorking closely with the \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e Safety Institute, which will provide foundational insights to our central \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e risk assessment activities and inform our approach to \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e regulation, on an ongoing basis. 
The \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e Safety Institute will ensure that the UK takes an evidence-based, proportionate approach to regulating the risks of \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e.\u003c/p\u003e\n \u003c/li\u003e\n \u003c/ul\u003e\n \u003c/li\u003e\n \u003cli\u003e\n \u003cp\u003eProgress action to promote \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e opportunities and tackle \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e risks by:\u003c/p\u003e\n\n \u003cul\u003e\n \u003cli\u003e\n \u003cp\u003eConducting targeted engagement on our cross-economy \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e risk register and plan to assess the regulatory framework from the spring onwards.\u003c/p\u003e\n \u003c/li\u003e\n \u003cli\u003e\n \u003cp\u003eReleasing a call for views in spring to obtain further input on our next steps in securing \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e models, including a potential Code of Practice for cyber security of \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e, based on the \u003cabbr title=\"National Cyber Security Centre\"\u003eNCSC\u003c/abbr\u003e’s guidelines.\u003c/p\u003e\n \u003c/li\u003e\n \u003cli\u003e\n \u003cp\u003eEstablishing a new international dialogue to defend democracy and address shared risks related to electoral interference ahead of the next \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e Safety Summit.\u003c/p\u003e\n \u003c/li\u003e\n \u003cli\u003e\n \u003cp\u003eLaunching a call for evidence on \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e-related risks to trust in information and related issues such as deepfakes.\u003c/p\u003e\n \u003c/li\u003e\n \u003cli\u003e\n \u003cp\u003eExploring mechanisms for providing greater transparency, including measures so that rights holders can better understand whether content they produce is used as an input into \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e models.\u003c/p\u003e\n \u003c/li\u003e\n \u003cli\u003e\n \u003cp\u003ePhasing in the mandatory requirement for central government departments to use the Algorithmic Transparency Recording Standard (\u003cabbr title=\"Algorithmic Transparency Recording Standard\"\u003eATRS\u003c/abbr\u003e) over the course of the year.\u003c/p\u003e\n \u003c/li\u003e\n \u003c/ul\u003e\n \u003c/li\u003e\n \u003cli\u003e\n \u003cp\u003eBuild out the central function and support regulators by:\u003c/p\u003e\n\n \u003cul\u003e\n \u003cli\u003e\n \u003cp\u003eLaunching a new £10 million programme to support regulators to identify and understand risks in their domain and to develop their skills and approaches to \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e.\u003c/p\u003e\n \u003c/li\u003e\n \u003cli\u003e\n \u003cp\u003eEstablishing a steering committee to support and guide the activities of a formal regulator coordination structure within government in the spring.\u003c/p\u003e\n \u003c/li\u003e\n \u003cli\u003e\n \u003cp\u003eAsking key regulators to publish updates on their strategic approach to \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e by 30 April.\u003c/p\u003e\n \u003c/li\u003e\n \u003cli\u003e\n \u003cp\u003eCollaborating with regulators to iterate and expand our initial cross-sectoral guidance on implementing the principles, with further updates planned
by summer.\u003c/p\u003e\n \u003c/li\u003e\n \u003c/ul\u003e\n \u003c/li\u003e\n \u003cli\u003e\n \u003cp\u003eEncourage effective \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e adoption and provide support for industry, innovators, and employees by:\u003c/p\u003e\n\n \u003cul\u003e\n \u003cli\u003e\n \u003cp\u003eLaunching the pilot \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e and Digital Hub with the \u003cabbr title=\"Digital Regulation Cooperation Forum\"\u003eDRCF\u003c/abbr\u003e in the spring.\u003c/p\u003e\n \u003c/li\u003e\n \u003cli\u003e\n \u003cp\u003ePublishing an Introduction to \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e Assurance in spring.\u003c/p\u003e\n \u003c/li\u003e\n \u003cli\u003e\n \u003cp\u003ePublishing updated guidance on the use of \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e within \u003cabbr title=\"Human Resources\"\u003eHR\u003c/abbr\u003e and recruitment in spring.\u003c/p\u003e\n \u003c/li\u003e\n \u003cli\u003e\n \u003cp\u003ePublishing a full \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e skills framework that incorporates feedback to our consultation and supports employers, employees, and training providers to identify upskilling routes for \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e in spring.\u003c/p\u003e\n \u003c/li\u003e\n \u003cli\u003e\n \u003cp\u003eLaunching the \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e Management Essentials scheme to set a minimum good practice standard for companies selling \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e products and services by the end of the year.\u003c/p\u003e\n \u003c/li\u003e\n \u003cli\u003e\n \u003cp\u003ePublishing an update on our emerging processes guide by the end of the year.\u003c/p\u003e\n \u003c/li\u003e\n \u003c/ul\u003e\n \u003c/li\u003e\n \u003cli\u003e\n \u003cp\u003eSupport international collaboration on \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e governance by:\u003c/p\u003e\n\n \u003cul\u003e\n \u003cli\u003e\n \u003cp\u003eActioning our newly announced £9 million partnership with the US on responsible \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e as part of the \u003cabbr title=\"Department for Science, Innovation and Technology\"\u003eDSIT\u003c/abbr\u003e International Science Partnerships Fund.\u003c/p\u003e\n \u003c/li\u003e\n \u003cli\u003e\n \u003cp\u003ePublishing the first iteration of the International Report on the Science of \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e Safety in spring.\u003c/p\u003e\n \u003c/li\u003e\n \u003cli\u003e\n \u003cp\u003eSharing new knowledge with international partners through the \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e Safety Institute on an ongoing basis.\u003c/p\u003e\n \u003c/li\u003e\n \u003cli\u003e\n \u003cp\u003eSupporting the Republic of Korea and France on the next \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e Safety Summits on an ongoing basis, and considering the possible role of \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e Safety Summits beyond these.\u003c/p\u003e\n \u003c/li\u003e\n \u003cli\u003e\n \u003cp\u003eContinuing bilateral and multilateral partnerships on \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e, including the \u003cabbr title=\"Group of 
Seven\"\u003eG7\u003c/abbr\u003e, \u003cabbr title=\"Group of 20\"\u003eG20\u003c/abbr\u003e, Council of Europe, \u003cabbr title=\"Organisation for Economic Co-operation and Development\"\u003eOECD\u003c/abbr\u003e, United Nations, and \u003cabbr title=\"Global Partnership on AI\"\u003eGPAI\u003c/abbr\u003e, on an ongoing basis.\u003c/p\u003e\n \u003c/li\u003e\n \u003c/ul\u003e\n \u003c/li\u003e\n\u003c/ul\u003e" } }, { "@type": "Question", "name": "\n6. Summary of consultation evidence and government response", "url": "https://www.gov.uk/government/consultations/ai-regulation-a-pro-innovation-approach-policy-proposals/outcome/a-pro-innovation-approach-to-ai-regulation-government-response#summary-of-consultation-evidence-and-government-response", "acceptedAnswer": { "@type": "Answer", "url": "https://www.gov.uk/government/consultations/ai-regulation-a-pro-innovation-approach-policy-proposals/outcome/a-pro-innovation-approach-to-ai-regulation-government-response#summary-of-consultation-evidence-and-government-response", "text": "\u003cp\u003e94. This chapter provides a summary of the written evidence we received in response to our consultation followed by the government response. This chapter is structured by the 10 categories that we used to group our 33 consultation questions:\u003c/p\u003e\u003cul\u003e\n \u003cli\u003eThe revised cross-sectoral \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e principles.\u003c/li\u003e\n \u003cli\u003eA statutory duty to regard.\u003c/li\u003e\n \u003cli\u003eNew central functions to support the framework.\u003c/li\u003e\n \u003cli\u003eMonitoring and evaluation of the framework.\u003c/li\u003e\n \u003cli\u003eRegulator capabilities.\u003c/li\u003e\n \u003cli\u003eTools for trustworthy \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e.\u003c/li\u003e\n \u003cli\u003eFinal thoughts.\u003c/li\u003e\n \u003cli\u003eLegal responsibility for \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e.\u003c/li\u003e\n \u003cli\u003eFoundation models and the regulatory framework.\u003c/li\u003e\n \u003cli\u003e\n\u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e sandboxes and testbeds.\u003c/li\u003e\n\u003c/ul\u003e\u003cp\u003e95. In total, we received 409 written consultation responses from organisations and individuals. Annex A provides an overview of who we received responses from and outlines our method of analysis. We also proactively engaged with 364 individuals through roundtables, technical workshops, bilaterals, and a programme of ongoing regulator engagement. While we weave insights from this engagement throughout our analysis, Annex A provides a detailed overview of our engagement findings.\u003c/p\u003e\u003ch3 id=\"the-revised-cross-sectoral-ai-principles\"\u003e\n\u003cspan class=\"number\"\u003e6.1. \u003c/span\u003e The revised cross-sectoral \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e principles\u003c/h3\u003e\u003cp\u003e\u003cstrong\u003e1. Do you agree that requiring organisations to make it clear when they are using \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e would improve transparency?\u003c/strong\u003e\u003c/p\u003e\u003cp\u003e\u003cstrong\u003e2. Are there other measures we could require of organisations to improve transparency for \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e?\u003c/strong\u003e\u003c/p\u003e\u003cp\u003e\u003cstrong\u003e3. 
Do you agree that current routes to contest or get redress for \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e-related harms are adequate?\u003c/strong\u003e\u003c/p\u003e\u003cp\u003e\u003cstrong\u003e4. How could current routes to contest or seek redress for \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e-related harms be improved, if at all?\u003c/strong\u003e\u003c/p\u003e\u003cp\u003e\u003cstrong\u003e5. Do you agree that, when implemented effectively, the revised cross-sectoral principles will cover the risks posed by \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e technologies?\u003c/strong\u003e\u003c/p\u003e\u003cp\u003e\u003cstrong\u003e6. What, if anything, is missing from the revised principles?\u003c/strong\u003e\u003c/p\u003e\u003cp\u003eSummary of questions 1-6:\u003c/p\u003e\u003cp\u003e96. Over half of respondents agreed that, when implemented effectively, the revised principles would cover the key risks posed by \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e technologies. The revised principles included safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress. However, respondents also advocated for the explicit inclusion of human rights, operational resilience, data quality, international alignment, systemic risks and wider societal impacts, sustainability, and education and literacy.\u003c/p\u003e\u003cp\u003e97. Respondents wanted to see further detail on the implementation of the principles, regulator capability, and interactions with existing law. Respondents consistently stressed the fast pace of technological change and reflected that the framework should be adaptable and supported by monitoring and evaluation. Some respondents were concerned that the principles would not be sufficiently enforceable, citing a lack of statutory backing.\u003c/p\u003e\u003cp\u003e98. There was strong support for a range of transparency measures from respondents. Respondents emphasised that transparency was key to building public trust, accountability, and an effective and verifiable regulatory framework. A majority of respondents agreed that a requirement for organisations to make it clear when they are using \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e would improve transparency. Those who disagreed felt that labelling \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e use would be either insufficient or disproportionately burdensome. Respondents suggested a range of transparency measures including the public disclosure of inputs like compute and data; labelling \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e use and outputs; opt-ins and human alternatives to automated processing; explanations for \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e outcomes, impacts and limitations; public or organisational \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e registers; disclosure of model details to regulators; and independent assurance tools including audits and technical standards.\u003c/p\u003e\u003cp\u003e99. Most respondents reported that current routes to contest or seek redress for \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e-related harms through existing legal frameworks are not adequate. 
Respondents noted that it can be difficult to identify \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e-related harms and that the high costs of litigation often prevent individuals from seeking redress. Many respondents wanted to see the government clarify the legal rights and responsibilities relating to \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e, many of whom suggested doing so through regulatory guidance. Some endorsed the introduction of statutory requirements. Respondents recommended establishing accessible redress routes, with some advocating for a central, cross-sector redress mechanism such as a dedicated \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e ombudsman. Respondents also noted that international agreements would be needed to ensure effective routes to contest or seek redress for \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e-related harms across borders. Respondents emphasised that better \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e transparency would help make redress more accessible across a broad range of potential harms, including intellectual property infringement.\u003c/p\u003e\u003cp\u003eResponse:\u003c/p\u003e\u003cp\u003e100. The government wants to ensure that the UK maintains its position as a global leader in \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e. This means promoting safe, responsible innovation to ensure that we maximise the benefits \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e can bring across the country. Our cross-sectoral principles set out our expectations for the responsible design, development, and application of \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e to help guide businesses and organisations building and using these technologies. We are encouraged to see that most respondents agree that the revised cross-sectoral principles will cover the risks posed by \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e when implemented effectively.\u003c/p\u003e\u003cp\u003e101. We expect regulators to apply the principles within their existing remits and in line with our existing laws and values, respecting the UK’s long history of democracy, strong rule of law, and commitments to human rights and environmental sustainability. As aspects of these values and rules are enshrined in the law that regulators are bound to follow, we do not think it is necessary to include democracy, human rights, the rule of law, or sustainability specifically within the principles themselves. The guidance we are publishing alongside this consultation response will support regulators to implement the principles within their respective domains.\u003c/p\u003e\u003cp\u003e102. The principles already cover issues raised by respondents linked to both operational resilience (safety, security, and robustness) and data protection (transparency, fairness, and accountability). We expect all actors across the \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e life cycle to adhere to existing legal frameworks, including data protection law.
The UK’s existing data protection legislation (UK \u003cabbr title=\"UK General Data Protection Regulation\"\u003eGDPR\u003c/abbr\u003e and the Data Protection Act 2018) regulates the development of \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e systems and other technologies where personal data is involved. The Data Protection and Digital Information Bill will clarify the rights of data subjects to specific safeguards when subject to solely automated decisions that have significant effects on them. Furthermore, the Information Commissioner’s Office (\u003cabbr title=\"Information Commissioner's Office\"\u003eICO\u003c/abbr\u003e) has created specific guidance on how to use data for \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e in compliance with data protection law\u003csup id=\"fnref:105\" role=\"doc-noteref\"\u003e\u003ca href=\"#fn:105\" class=\"govuk-link\" rel=\"footnote\"\u003e[footnote 105]\u003c/a\u003e\u003c/sup\u003e. Beyond the scope of data protection law, the government is assessing a range of possible interventions aligned with the principles as part of our work to encourage the responsible and safe development of highly capable \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e. For example, we are exploring if and how to introduce targeted measures on developers of highly capable general-purpose \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e systems related to transparency requirements (for example, on training data), risk management, and accountability and corporate governance related obligations. Similarly, our central risk assessment activities will identify and monitor a range of risks, providing cross-economy oversight that will capture systemic risks and wider societal impacts.\u003c/p\u003e\u003cp\u003e103. We acknowledge the broad support for transparency and we will continue our work assessing whether and which measures provide the most meaningful transparency for \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e end users and actors across the \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e life cycle. It is important that we take an evidence-based approach to transparency. The Algorithmic Transparency Recording Standard (\u003cabbr title=\"Algorithmic Transparency Recording Standard\"\u003eATRS\u003c/abbr\u003e) is a practical mechanism for transparency that was developed through public engagement and has been piloted across the UK\u003csup id=\"fnref:106\" role=\"doc-noteref\"\u003e\u003ca href=\"#fn:106\" class=\"govuk-link\" rel=\"footnote\"\u003e[footnote 106]\u003c/a\u003e\u003c/sup\u003e. The \u003cabbr title=\"Algorithmic Transparency Recording Standard\"\u003eATRS\u003c/abbr\u003e helps public sector organisations provide clear information about algorithmic tools they use in decision-making. As mentioned in section 5.1, we will now be making the use of the \u003cabbr title=\"Algorithmic Transparency Recording Standard\"\u003eATRS\u003c/abbr\u003e a requirement for all government departments and plan to expand this across the broader public sector over time. While measures like watermarking can help users identify \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e-generated content, we need to ensure that proposed interventions are robust, cannot be easily overridden, and achieve positive outcomes.
To establish greater transparency on \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e outputs, we published an “Emerging processes for frontier \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e safety” document that outlines three areas of practice related to identifying \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e-generated content, including research techniques, watermarking, and \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e output databases\u003csup id=\"fnref:107\" role=\"doc-noteref\"\u003e\u003ca href=\"#fn:107\" class=\"govuk-link\" rel=\"footnote\"\u003e[footnote 107]\u003c/a\u003e\u003c/sup\u003e. As mentioned in section 5.2.2, we will update this guide by the end of the year and continue to encourage \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e companies to develop best practices.\u003c/p\u003e\u003cp\u003e104. Our expert regulators are already using their existing remits to implement the \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e principles, including the contestability and redress principle, which includes expectations about clarifying existing routes to redress. We recognise the link between the fair and effective allocation of liability throughout the \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e life cycle and the availability and clarity of routes to redress. Our work to explore existing liability frameworks and accountability through the value chain is ongoing and includes analysis of the existence of redress mechanisms. As a first step towards ensuring fair and effective allocation of accountability and liability, the government is considering introducing targeted binding requirements on developers of highly capable general-purpose \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e systems, which may involve creating or allocating new regulatory powers.\u003c/p\u003e\u003ch3 id=\"a-statutory-duty-to-regard\"\u003e\n\u003cspan class=\"number\"\u003e6.2. \u003c/span\u003e A statutory duty to regard\u003c/h3\u003e\u003cp\u003e\u003cstrong\u003e7. Do you agree that introducing a statutory duty on regulators to have due regard to the principles would clarify and strengthen regulators’ mandates to implement our principles, while retaining a flexible approach to implementation?\u003c/strong\u003e\u003c/p\u003e\u003cp\u003e\u003cstrong\u003e8. Is there an alternative statutory intervention that would be more effective?\u003c/strong\u003e\u003c/p\u003e\u003cp\u003eSummary of questions 7-8:\u003c/p\u003e\u003cp\u003e105. Most respondents somewhat or strongly agreed that introducing a statutory duty on regulators to have due regard to the principles set out in the \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e regulation white paper would clarify and strengthen regulators’ mandates to implement the principles while retaining a flexible approach to implementation. However, nearly a quarter noted that regulators would need enhanced resources and capabilities in order to enact a statutory duty effectively.\u003c/p\u003e\u003cp\u003e106. Around a third of respondents argued that additional, targeted statutory measures would be necessary to effectively implement the regulatory framework. Many suggested expanding regulator powers, noting that the existing statutory remits of some regulators would limit their ability to implement the framework.
In particular, respondents raised the need to review and potentially expand the investigatory powers and capabilities of regulators in regard to AI.

107. Some advocated for wider, horizontal statutory measures such as specific AI legislation, a new AI regulator, and strict rules about the use of AI in certain contexts.

108. Other respondents felt that, if rushed, the implementation of a duty to regard could disrupt regulation, innovation, and trust. These respondents recommended that the duty should be reviewed after a period of non-statutory implementation, particularly to observe interactions with existing law and regulatory remits. Some respondents noted that the end goal and timeframes for the AI regulatory framework were not clear, causing uncertainty.

Response:

109. We are encouraged that respondents to this question are enthusiastic about the proper and effective implementation of our cross-sectoral AI principles. We welcome the broad support for a statutory duty on regulators, recognising that respondents also gave conditions and alternatives that could be used to implement the framework effectively. As set out in the AI regulation white paper, we anticipate introducing a statutory duty on regulators requiring them to have due regard to the principles after reviewing an initial period of non-statutory implementation.

110. We acknowledge concerns from respondents that rushing the implementation of a duty to regard could cause disruption to responsible AI innovation. We will not rush to legislate but will evaluate whether it is necessary and effective to introduce a statutory duty on regulators to have due regard to the principles. We currently think that a non-statutory approach offers critical adaptability, but we will keep this under review, for example by assessing the updates on strategic approaches to AI that the government has asked a number of regulators to publish by 30 April 2024. We will also work with government departments and regulators to analyse and review potential gaps in existing regulatory powers and remits.

111. We are pleased to see that many regulators are taking proactive steps to address AI and implement the principles within their remits. This includes work by the Competition and Markets Authority (CMA), Advertising Standards Authority (ASA), and Office of Communications (Ofcom)[footnote 108].
Others are progressing their existing plans in ways that align with these principles, such as the ICO and Medicines and Healthcare products Regulatory Agency (MHRA)[footnote 109].

112. We continue to work closely with regulators to develop the framework, ensure coherent implementation, and build regulator capability. To support a coherent approach across sectors, we are publishing initial guidance to regulators alongside this response on how to apply the cross-sectoral AI principles within their existing remits. We will update this guidance over time to ensure that it reflects developments in our regime and technological advances in AI. We will establish a steering committee by spring 2024 to support and guide the activity of the central regulator coordination function (see section 5.1.2 for details).

113. We note respondents’ concerns across the consultation that any new rules for AI should not contradict or duplicate existing laws. We will continue to evaluate any potential gaps or frictions within the existing statutory remits of regulators and current legislative frameworks. In the white paper, we said that we would keep the wider AI landscape under review in order to inform future iterations of the regulatory framework, including whether further interventions on foundation models may be required. We will consult on our plan for monitoring and evaluating the regulatory framework in 2024 (see our response to questions on monitoring and evaluation in section 6.4 for more detail).

6.3. New central functions to support the framework

9. Do you agree that the functions outlined in section 3.3.1 would benefit our AI regulation framework if delivered centrally?

10. What, if anything, is missing from the central functions?

11. Do you know of any existing organisations who should deliver one or more of our proposed central functions?

12. Are there additional activities that would help businesses confidently innovate and use AI technologies?

12.1. If so, should these activities be delivered by government, regulators, or a different organisation?
13. Are there additional activities that would help individuals and consumers confidently use AI technologies?

13.1. If so, should these activities be delivered by government, regulators, or a different organisation?

14. How can we avoid overlapping, duplicative, or contradictory guidance on AI issued by different regulators?

Summary of questions 9-14:

114. Nearly all respondents agreed that delivering the proposed functions centrally would benefit the AI regulation framework, with many praising the approach for ensuring that the government can monitor and iterate the framework.

115. While respondents widely supported the proposed central functions, many wanted more detail on each function and its activities. Some respondents felt there should be a greater emphasis on partnerships and collaboration to deliver the activities. Respondents also wanted more detail on international collaboration. Some suggested that the government should prioritise building the central risk function. Of these responses, a few noted that more consideration should be given to ethical and societal risks.

116. Respondents emphasised that the regulatory functions should build from the existing strengths of the UK’s regulatory landscape, with approximately a third identifying regulators as organisations who should deliver one or more central functions. Overall, respondents emphasised that effective delivery would require collaboration between government, regulators, industry, civil society, academia, and the general public. Over a quarter of respondents felt that technology-focused research institutes and think tanks could help deliver the central functions.

117. Respondents suggested a range of additional activities that government and regulators could offer to support industry. Around a third of respondents felt that training products and educational resources would help organisations to apply the principles to everyday business practices. Nearly a quarter suggested that regulators should produce guidance to allow businesses to innovate confidently. Some noted the importance of internationally interoperable frameworks for AI regulation to ensure a low compliance burden on organisations building, selling, and using AI technologies. Respondents also argued that more work is needed to ensure that businesses have access to high-quality, diverse, and ethically sourced data to support their AI innovation efforts.

118. When thinking about additional activities for individuals and consumers, respondents prioritised transparency from the cross-sectoral principles, with nearly half arguing that individuals and consumers should be able to identify when and how AI is being used by a service or organisation.
More than a third of respondents felt that education and training would enable consumers to use AI products and services safely and more effectively.

119. Around a third suggested that the proposed central functions would be the most effective mechanism to avoid overlapping, duplicative, or contradictory guidance.

Response:

120. We welcome the strong support for the central functions proposed in the AI regulation white paper to coordinate, monitor, and adapt the AI framework. Together, these functions will provide clarity, ensure the framework works as intended, and future-proof the UK’s regulatory approach. That is why we have already started to establish the central function within government to undertake the activities proposed in the white paper (see section 5.1.2 for details).

121. We note respondents’ concerns around the potential risks posed by the rapid developments in AI technology. We have already established the risk monitoring and assessment activities of the central function within DSIT, reflecting the strong recommendation from respondents to operationalise cross-economy AI risk management as a priority. Our centralised risk assessment activities will identify, measure, and monitor existing and emerging AI risks using expertise from across government, industry, and academia, including the AI Safety Institute. This will allow us to monitor risks holistically and identify any potential gaps in our approach. Horizon scanning will extend our central risk assessment activities, monitoring emerging AI trends and opportunities to maximise benefits while taking a proportionate approach to AI risks. This year, we will conduct targeted engagement on our cross-economy AI risk register.

122. Reflecting respondents’ views that the proposed central function will help regulators avoid producing overlapping, duplicative, or contradictory guidance, we are developing a coordination function to support regulators to interpret and apply the principles within their remits (see section 5.1.2 for detail). As part of this, we will establish a steering committee in the spring with government representatives and key regulators to support knowledge exchange and coordination on AI governance.
To further support regulators and ensure that the UK’s strength in AI research is fully utilised in our regulatory framework, we have also announced a £10 million package to support regulator AI capabilities and a new commitment by UK Research and Innovation (UKRI) to improve links between regulators and the skills, expertise, and activities supported by their future investments in AI research.

123. To ensure appropriate levels of cohesion with emerging approaches to AI regulation in other jurisdictions, we will continue to work with international partners on regulatory interoperability, including technical standards and assurance techniques, to make it easier for UK companies to attract overseas investment and trade internationally. For more detail, see section 5.3 and our response to questions on tools for trustworthy AI in section 6.6.

124. Alongside this, we have announced a new pilot regulatory service to be hosted by the Digital Regulation Cooperation Forum (DRCF) to make it easier for AI and digital innovators to navigate the regulatory landscape (see our response to questions on AI sandboxes for more detail: section 6.10).

125. We remain committed to the iterative approach set out in the white paper, anticipating that our framework will need to evolve as new risks or regulatory gaps emerge. Our monitoring and evaluation activities will assess if, when, and how we make changes to our framework, gathering evidence from a wide range of sources. We provide more detail in our response to questions on monitoring and evaluation in section 6.4.

126. We are encouraged that respondents endorsed a wide range of organisations in the UK as useful partners to deliver the proposed centralised activities. As we said in the white paper, the government will deliver the central function initially, working in partnership with regulators and other key actors in the AI ecosystem. The government’s primary role will be to leverage existing activities where possible and ensure that all the necessary activities to promote responsible AI innovation are taking place.

6.4. Monitoring and evaluation of the framework

15. Do you agree with our overall approach to monitoring and evaluation?

16. What is the best way to measure the impact of our framework?
17. Do you agree that our approach strikes the right balance between supporting AI innovation; addressing known, prioritised risks; and future-proofing the AI regulation framework?

Summary of questions 15-17:

127. A majority of respondents agreed with the overall approach to monitoring and evaluation, commending the proposed feedback loop with industry and civil society as a means to gain insights about the effectiveness of the framework.

128. Just over a quarter of respondents emphasised that engaging with a diverse range of stakeholders would create the most valuable insights. Many advocated for the inclusion of wider civil society and consumer representatives to ensure that voices outside of the tech industry are heard, as well as regular engagement with industry and research experts. Respondents also stressed that international engagement would be key to effectively harmonise approaches across jurisdictions.

129. Respondents wanted to see more detail on the practicalities of the monitoring and evaluation framework, including how data will be collected and used to measure success. Nearly a third of respondents suggested the impact of the framework should be measured through a range of data sources, and recommended collecting data on key indicators as well as using impact assessments.

130. Half of respondents agreed that the approach appears to strike the right balance between supporting AI innovation; addressing known, prioritised risks; and future-proofing the AI regulation framework. However, some respondents disagreed and argued that the approach prioritised AI innovation and economic growth over safety and the mitigation of AI-related risks.

Response:

131. We are pleased to note the positive feedback on our proposed approach to the monitoring and evaluation of the framework. Monitoring and evaluation activities will allow us to review the implementation of the AI regulation framework across the economy and are at the heart of our iterative approach. They will ensure that the regime is working as intended: actively responding to prioritised risks, supporting innovation, and maximising the benefits of AI across the UK. We agree with respondents that, as we implement the framework set out in the AI regulation white paper, monitoring and evaluation will allow the government to spot potential issues and adapt the framework in response if needed.

132. We acknowledge growing concerns that we may face more safety risks related to AI as these technologies are increasingly used.
We recognise that many of these concerns focus on the advanced capabilities of the most powerful AI systems. That is why we remain committed to an adaptable approach that will evolve as new risks or regulatory gaps emerge. Our initial thinking on potential new measures targeted at the developers of highly capable general-purpose AI models is presented in section 5.2. The AI Safety Institute will advance AI safety capabilities in the public interest, allowing the government to respond to the cutting edge of technological development. Our monitoring and evaluation will build on work by the Institute, our cross-sectoral risk assessment, and feedback from stakeholders to understand how the regulatory framework is performing. Our evaluation will consider whether the framework is effectively achieving the objectives set out in the white paper, including building public trust by addressing potential risks appropriately.

133. We note the emphasis from respondents on using the right data, metrics, and sources to evaluate how well the regulatory framework is performing. We agree that getting the measures of success right is key to the effectiveness of the framework, and we are actively working on this as we develop our monitoring and evaluation framework for publication. We will conduct a targeted consultation on our proposed plan to assess the framework with a range of stakeholders in spring. As part of this, we will seek detailed views on our proposed metrics and data sources.

6.5. Regulator capabilities

18. Do you agree that regulators are best placed to apply the principles and government is best placed to provide oversight and deliver central functions?

19. As a regulator, what support would you need in order to apply the principles in a proportionate and pro-innovation way?

20. Do you agree that a pooled team of AI experts would be the most effective way to address capability gaps and help regulators apply the principles?

Summary of questions 18-20:

134. Nearly all respondents agreed that regulators are best placed to lead the implementation of the principles, and that the government is best placed to provide oversight and delivery of the central functions. However, respondents argued that the government would need to improve regulator capability in order for this approach to be effective. Some respondents were concerned at the lack of a specific body to support the implementation and oversight of the proposed framework, with some asking for AI legislation and a new AI regulator.
135. While regulators are broadly supportive of the proposed approach, over a quarter of those that responded to Q19 suggested that increased AI expertise would help them effectively apply the principles within their existing remits. Overall, regulators reported different levels of technical expertise and AI capability. Some felt that greater organisational capacity and additional resources would help them undertake new responsibilities related to AI and understand where and how AI is used in their domains.

136. Regulators also noted that AI presents coordination challenges across domains and sectors, with some emerging risks related to AI not falling clearly within a specific existing remit. Just over a quarter of regulators that responded to Q19 emphasised that close collaboration between regulators and the proposed central functions would help build meaningful sector-specific requirements and prevent duplication.

137. A majority of respondents agreed that a pooled team of AI experts would be the most effective way to address the different levels of capability across the regulatory landscape. Respondents advocated for a diverse and multi-disciplinary pool to bring together technical AI expertise with sector-specific regulatory knowledge, industry specialists, and civil society. Respondents argued that this would ensure that regulators are considering a broad range of perspectives in their application of the cross-sectoral AI principles.

Response:

138. We are encouraged that respondents broadly agree with the proposed regulator-led approach for the implementation of the principles, with the government providing oversight and delivering the central function. As outlined in the AI regulation white paper, our existing expert regulators are best placed to conduct detailed risk analysis and enforcement activities within their areas of expertise. We will continue to work closely with regulators to ensure that potential risks posed by AI are sufficiently covered by our rule of law. In keeping with our iterative approach, we will seek to adapt the framework, including the regulatory architecture, if analysis proves this is necessary and effective.

139. As pointed out by respondents across the consultation, to regulate AI effectively our regulators must have the right skills, tools, and expertise. To support regulators’ ability to adapt and respond to the risks and opportunities that AI presents in their domains, we are today announcing a £10 million investment in technical upskilling.
We will work closely with regulators to identify the most promising opportunities to leverage this funding, including designing a delivery model that can achieve the intended objectives more effectively than the central pool of expertise proposed in the AI regulation white paper. In particular, regulator feedback has shown that we need to support them to develop tools and skills within their specific domains, albeit working collaboratively where appropriate, and deliver support that aligns with and supports their independence. As capability and resource vary across regulators, our intention is that this fund will particularly enable those regulators with less mature AI expertise to conduct research, uncover foundational insights, and develop or adapt practical tools to ensure compliance in an AI-enabled future.

140. Further, as set out in the response to Professor Dame Angela McLean’s cross-cutting review of pro-innovation regulation of technologies[footnote 110], the government is also exploring how to further support regulators to develop the specialist skills necessary to regulate emerging technologies, including increased flexibility on pay and conditions. This builds on schemes already in place to support secondments between government departments, regulators, academia, and industry.

141. We acknowledge regulators’ concerns that AI can pose coordination challenges. In the white paper we proposed a number of centralised activities to support regulators and ensure that the regulatory landscape for AI is consistent and cohesive. To facilitate cross-cutting collaboration and ensure that the overall regulatory framework functions as intended, we are developing our regulatory coordination activities. These coordination activities will sit in our central function in government alongside our AI risk assessment activities (see more detail in section 5.1.2). To support a coherent approach across sectors, we are also publishing initial guidance to regulators alongside this response on how to apply the cross-sectoral AI principles within their existing remits.

142. We note respondents’ emphasis on transparency and the need for industry and civil society to have visibility of the AI regulation framework. We agree that establishing feedback loops with industry, academia, and civil society will be key to measuring the effectiveness of the framework. Our central function will engage stakeholders to ensure that a wide range of voices are heard and considered: providing clarity, building trust, ensuring interoperability, and informing the government of the need to adapt the framework.

6.6. Tools for trustworthy AI
21. Which non-regulatory tools for trustworthy AI would most help organisations to embed the AI regulation principles into existing business processes?

Summary of question 21:

143. There was strong support for the use of technical standards and assurance techniques, with some respondents agreeing that both would help organisations to embed the AI principles into existing business processes. Many respondents praised the UK AI Standards Hub and the Centre for Data Ethics and Innovation’s (CDEI) work on AI assurance. While some respondents noted that businesses would have a smaller compliance burden if tools and processes were consistent across sectors, others noted the importance of additional sector-specific tools and processes. Respondents also suggested supplementing technical standards with case studies and examples of good practice.

144. Respondents argued that standardised tools and techniques for identifying and mitigating potential risks related to AI would also support organisations to embed the AI principles. Some identified assurance techniques such as impact and risk assessments, model performance monitoring, model uncertainty evaluations, and red teaming as particularly helpful for identifying AI risks. A few respondents recommended assurance techniques that can be used to detect and prevent issues such as drift to mitigate risks related to data. While commending the role of tools for trustworthy AI, a small number of respondents also expressed a desire for more stringent regulatory measures, such as statutory requirements for high-risk applications of AI or a watchdog for foundation models.

145. Respondents felt that tools and techniques such as fairness metrics, transparency reports, and organisational AI ethics guidelines can support the responsible use of AI while growing public trust in the technology. Respondents expressed the desire for third-party verification of AI models through bias audits, consumer labelling schemes, and external certification against technical standards.
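To make the assurance techniques mentioned in paragraphs 144 and 145 concrete, the sketch below computes two widely used checks: a population stability index (PSI) for data drift, and a demographic parity difference as one simple fairness metric. It is a minimal illustration in Python on toy data, not a government-endorsed tool; the thresholds in the comments are a common rule of thumb rather than a regulatory requirement.

```python
import numpy as np

def population_stability_index(reference, live, bins=10):
    # Bin edges come from the reference (e.g. training) distribution;
    # live values outside the reference range are clipped into the
    # outermost bins rather than dropped.
    edges = np.quantile(reference, np.linspace(0, 1, bins + 1))
    reference = np.clip(reference, edges[0], edges[-1])
    live = np.clip(live, edges[0], edges[-1])
    ref_pct = np.histogram(reference, edges)[0] / len(reference)
    live_pct = np.histogram(live, edges)[0] / len(live)
    eps = 1e-6  # avoid log(0) in sparse bins
    ref_pct = np.clip(ref_pct, eps, None)
    live_pct = np.clip(live_pct, eps, None)
    # Rule of thumb often used in model monitoring: < 0.1 stable,
    # 0.1-0.25 moderate shift, > 0.25 significant drift.
    return float(np.sum((live_pct - ref_pct) * np.log(live_pct / ref_pct)))

def demographic_parity_difference(y_pred, group):
    # Gap in positive-outcome rates between two groups (coded 0/1).
    # This is one of several, mutually incompatible fairness metrics;
    # a value near 0 indicates parity on this measure only.
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return float(abs(y_pred[group == 0].mean() - y_pred[group == 1].mean()))

rng = np.random.default_rng(0)
train_scores = rng.normal(0.0, 1.0, 5000)   # reference feature values
live_scores = rng.normal(0.3, 1.1, 5000)    # drifted live feature values
print(f"PSI: {population_stability_index(train_scores, live_scores):.3f}")

y_pred = rng.integers(0, 2, 1000)           # model decisions (toy data)
group = rng.integers(0, 2, 1000)            # protected attribute (toy data)
print(f"Parity gap: {demographic_parity_difference(y_pred, group):.3f}")
```

Checks like these are the kind of repeatable, auditable evidence that third-party verification schemes and bias audits can build on.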
146. A few respondents noted the benefits of international harmonisation across AI governance approaches for both organisations and consumers. Some endorsed interoperable technical standards for AI, commending global standards development organisations (SDOs) such as the International Organization for Standardization (ISO) and the Institute of Electrical and Electronics Engineers (IEEE). Others noted the strength of a range of international work on AI, including that by individual countries, such as the USA’s National Institute of Standards and Technology (NIST) AI Risk Management Framework (RMF) and Singapore’s AI Verify Foundation, along with work on international governance by multilateral bodies such as the Organisation for Economic Co-operation and Development (OECD), United Nations (UN), and G7.

Response:

147. We are pleased to see such strong support for the continued development and adoption of technical standards and assurance techniques for AI. These tools will help organisations put our proposed regulatory principles into practice, innovate responsibly, and build public confidence. We recognise that, in some instances, it will be important to have assurance techniques and technical standards that are specific to a particular context, application, or sector. That is why, in the AI regulation white paper, we set out a layered approach to technical standards, encouraging regulators to build on widely applicable sector-agnostic tools where appropriate[footnote 111].

148. We welcome praise for the UK AI Standards Hub and the CDEI. Launched in October 2022, the Hub brings together the UK’s technical expertise on AI standards, including the Alan Turing Institute, British Standards Institution, and National Physical Laboratory, to provide training and information on the complex international AI standards landscape.
The CDEI published a Portfolio of AI Assurance Techniques in June 2023 with real-world examples to support the development of trustworthy AI, which respondents indicated would be helpful[footnote 112]. The Portfolio is also part of the OECD’s Catalogue of Tools and Metrics for Trustworthy AI, which shares the CDEI’s case studies with an international audience. The CDEI also launched the “Fairness Innovation Challenge” in October to support the development of new socio-technical solutions to address bias and discrimination in AI systems[footnote 113]. Today we are announcing that the Centre for Data Ethics and Innovation (CDEI) is changing its name to the Responsible Technology Adoption Unit to more accurately reflect its role within the Department for Science, Innovation and Technology (DSIT): developing tools and techniques that enable the responsible adoption of AI in the private and public sectors. This year, DSIT will publish an “Introduction to AI assurance” to further promote the value of AI assurance.

149. We note that respondents would like to see more standardised tools and techniques to identify and manage AI risk. Ahead of the AI Safety Summit in November 2023, we published “Emerging processes for frontier AI safety” to help prompt a debate about what good safety processes for advanced AI systems look like[footnote 114]. The document provides a snapshot of promising ideas, emerging processes, and associated practices in AI safety.
It is intended as a point of reference to inform the development of frontier AI organisations’ safety policies, as well as a companion for readers of these policies. It outlines early thinking on practices for innovation in frontier AI development, including model evaluations and red teaming, responsible capability scaling, and model reporting and information sharing. In 2024, we will encourage AI companies to develop their AI safety and responsible capability scaling policies. As part of this work, we will update our emerging processes guide by the end of the year. More widely, we note the development of relevant global technical standards which provide guidance on risk management related to AI. For example, ISO 42001 will help organisations manage their AI systems in a trustworthy way.

150. In the white paper, we note that responding to risk and building public trust are key drivers for regulation. We therefore understand respondents’ emphasis on tools for building public trust as a key way to ensure responsible AI innovation. The Responsible Technology Adoption Unit (formerly CDEI) within DSIT has a specialist Public Insights team that regularly engages with the general public and affected communities to build a deep understanding of public attitudes towards AI[footnote 115]. These insights are used by DSIT and wider government to align our regulatory approaches to AI with public values and foster trust in these technologies. DSIT and the Central Digital and Data Office (CDDO) have also developed the ATRS to help public sector organisations provide clear information about the algorithmic tools they use to support decisions[footnote 116]. Following a successful pilot of the standard, and the publication of an approved cross-government version last year, we will now be making the use of the ATRS a requirement for all government departments and plan to expand this across the broader public sector over time.
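To make the idea of a transparency record concrete, the sketch below shows the rough shape of the information a record of this kind can capture. The field names are simplified, hypothetical stand-ins rather than the authoritative ATRS schema, which is defined in the published standard.

```python
from dataclasses import dataclass, field

@dataclass
class TransparencyRecord:
    """Illustrative record for an algorithmic tool used in decision-making.
    Field names are hypothetical; consult the published ATRS for the
    authoritative structure and required content."""
    tool_name: str
    organisation: str
    description: str        # what the tool does, in plain language
    role_in_decision: str   # how outputs feed into human decisions
    human_oversight: str    # who reviews or can override outputs
    data_sources: list[str] = field(default_factory=list)
    contact_email: str = ""

# A hypothetical record for a fictional tool and organisation.
record = TransparencyRecord(
    tool_name="Permit triage assistant",
    organisation="Example Borough Council",
    description="Ranks incoming permit applications for caseworker review.",
    role_in_decision="Advisory only; caseworkers make the final decision.",
    human_oversight="A senior caseworker reviews all rankings.",
    data_sources=["Application forms", "Historic processing times"],
    contact_email="algorithms@example.gov.uk",
)
print(f"{record.tool_name}: {record.role_in_decision}")
```

One reason a consistent, machine-readable shape matters is that records can then be aggregated and searched across organisations, rather than read one page at a time.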
151. We agree with respondents that international cooperation on AI governance will be key to successfully mitigating AI-related risks and building public trust in AI. The first ever AI Safety Summit convened a group of representatives from around the globe to set a new path for collective international action to navigate the opportunities and risks of frontier AI. We also continue to collaborate internationally on AI governance, both bilaterally and through several multilateral fora. For example, the UK plays an important role in AI discussions at the UN, Council of Europe, OECD, G7, Global Partnership on AI (GPAI), and G20. Notably, the UK worked closely with G7 partners in negotiating the Codes of Conduct and Guiding Principles for the development of advanced AI systems as part of the Hiroshima AI Process. The UK fully supports developing AI policy and technical standards in a globally inclusive, multi-stakeholder, open, and consensus-based way. We support UK stakeholders to participate in standards development organisations (SDOs) to both leverage the benefits of global technical standards here in the UK and deliver global digital technical standards shaped by democratic values.

6.7. Final thoughts

22. Do you have any other thoughts on our overall approach? Please include any missed opportunities, flaws, and gaps in our framework.

Summary of question 22:

152. Some respondents felt that the AI regulation framework set out in the white paper would benefit from more detailed guidance on AI-related risks.
Some wanted to see more stringent measures for severe risks, particularly related to the use of AI in safety-critical contexts. Respondents suggested that the framework would be clearer if the government provided risk categories for certain uses of AI, such as in law enforcement and the workplace. Other respondents stressed that AI can pose or accelerate significant risks related to privacy and data protection breaches, cyberattacks, electoral interference, misinformation, human rights infringements, environmental sustainability, and competition issues. A few respondents were concerned about the potential existential risk posed by AI. Many respondents felt that AI technologies are developing faster than regulatory processes.

153. Some respondents argued that the success of the framework relies on sufficient coordination between regulators in order to provide a clear and consistent approach to AI across sectors and markets. Respondents also noted that different sectors face particular AI-related benefits and risks, suggesting that the framework would need to balance the consistency provided by cross-sector requirements with the accuracy of sector-specific approaches. In particular, respondents flagged that any new rules or bodies to regulate AI should build from the existing statutory remits of regulators and relevant regulatory standards. Respondents also noted that regulators would need to be adequately resourced with technical expertise and skills to implement the framework effectively.

154. Respondents consistently emphasised the importance of international harmonisation to effective AI regulation. Some respondents suggested that the UK should work towards an internationally aligned regulatory ecosystem for AI by developing a gold standard framework and promoting best practice through key multilateral channels such as the OECD, UN, GPAI, G7, G20, and the Council of Europe. Respondents noted that divergent or overlapping approaches to regulating AI would cause significant compliance burdens. Respondents argued that international cooperation can support responsible AI innovation in the UK by creating clear and certain rules that allow investments to move across multiple markets. Respondents also suggested establishing bilateral working groups with key strategic partners to share expertise.
Some respondents stressed that the UK’s pro-innovation approach should be delivered at pace to remain competitive in a fast-moving international landscape.

Response:

155. We acknowledge that many respondents would like more detail on the implementation of the framework set out in the AI regulation white paper, particularly regarding AI-related risks. We have already started to deliver the proposals set out in the white paper, working quickly to establish centralised, cross-economy risk assessment activities within the government to identify, measure, and mitigate risks. Building from this work, we published research on frontier AI capabilities and risks for discussion at the AI Safety Summit[footnote 117]. This research outlined initial evidence on the most advanced AI systems and how their capabilities and risks may continue to develop. The significant uncertainty in the evidence highlights the need for further research.

156. This year, we will consult on a cross-economy risk register for AI, seeking expert views on our risk assessment methodology and whether we have comprehensively captured AI-related risks. The AI Safety Institute will advance the world’s knowledge of AI safety by carefully examining, evaluating, and testing advanced AI systems. It will conduct fundamental research on how to keep people safe in the face of fast and unpredictable technological progress.

157. In the white paper, we proposed an adaptable, principles-based approach to regulating AI in order to keep pace with rapid technological change. We will use our risk assessment and monitoring and evaluation activities to continue to assess measures for the targeted, proportionate, and effective prevention and mitigation of any new and accelerated risks related to AI, including those potentially posed by the development of the most powerful systems.

158. We agree that an effective framework for regulating AI will need to carefully balance cross-sector consistency with sector-specific needs in order to support responsible innovation. Our context-focused framework builds from the domain expertise of the UK’s regulators, ensuring that different industries benefit from existing regulatory knowledge.
While this approach streamlines compliance within specific sectors, we recognise the need for consistency and coordination between regulators to create an easily navigable regulatory landscape for businesses and consumers. That is why, as we note in detail in our responses to questions on regulator capability and AI sandboxes and testbeds (sections 6.5 and 6.10), we have been focusing on building from the existing strengths of UK regulators by establishing a pilot advisory service for AI innovators through the DRCF, sharing guidance on implementation, and building common regulator capability.

159. Alongside our work to quickly deliver on the centralised risk assessment and regulatory capability and coordination activities, the UK has led the way in convening world leaders at the first ever AI Safety Summit in order to establish an aligned approach to the most pressing risks related to the cutting edge of AI technology. Countries agreed to the Bletchley Declaration at the AI Safety Summit, recognising the need for international collaboration in understanding the risks and opportunities of frontier AI[footnote 118]. We will deliver a groundbreaking International Report on the Science of AI Safety to promote an evidence-based understanding of advanced AI[footnote 119]. Additionally, the UK, through the AI Safety Institute, will collaborate with other nations, including the US, to enhance our capability to research and evaluate AI risks, underscoring our ability to drive change through international coordination on this critical topic.

160. Our work at the AI Safety Summit is complemented by multilateral engagement in other AI-focused forums, such as the G7 Hiroshima Process, G20, UN, GPAI, and Council of Europe.
In multilateral engagements, we are working to leverage each forum’s strengths, expertise, and membership to prevent overlap or divergences with other regulatory systems, ensuring they are adding maximum value to global \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e governance discussions and the UK’s values and economic priorities. The UK is also pursuing bilateral cooperation with many partners, reflecting our commitment to interoperability and establishing international norms for responsible \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e innovation.\u003c/p\u003e\u003ch3 id=\"legal-responsibility-for-ai\"\u003e\n\u003cspan class=\"number\"\u003e6.8. \u003c/span\u003e Legal responsibility for \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e\n\u003c/h3\u003e\u003cp\u003e\u003cstrong\u003eL1. What challenges might arise when regulators apply the principles across different \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e applications and systems? How could we address these challenges through our proposed \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e regulatory framework?\u003c/strong\u003e\u003c/p\u003e\u003cp\u003e\u003cstrong\u003eL2.i. Do you agree that the implementation of our principles through existing legal frameworks will fairly and effectively allocate legal responsibility for \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e across the life cycle?\u003c/strong\u003e\u003c/p\u003e\u003cp\u003e\u003cstrong\u003eL.2.ii. How could it be improved, if at all?\u003c/strong\u003e\u003c/p\u003e\u003cp\u003e\u003cstrong\u003eL3. If you work for a business that develops, uses, or sells \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e, how do you currently manage \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e risk including through the wider supply chain? How could government support effective \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e-related risk management?\u003c/strong\u003e\u003c/p\u003e\u003cp\u003eSummary of questions L1-L3:\u003c/p\u003e\u003cp\u003e161. While respondents praised the benefits of a principles-based approach, nearly half were concerned about potential coordination issues between regulators and consistency across sectors. Some were concerned about confusing interdependencies between the \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e regulation framework and existing legislation. Respondents asked for sector-based guidance from regulators, compliance tools, and regulator engagement with industry. Some respondents also pointed to the importance of international alignment and collaboration.\u003c/p\u003e\u003cp\u003e162. A majority of respondents disagreed that the implementation of the principles through existing legal frameworks would fairly and effectively allocate legal responsibility for \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e across the life cycle. Just under a third of respondents felt that the government should clarify \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e-related liability. However, there was not clear agreement about where liability should sit, with respondents noting a range of potential responsibilities for different actors across the \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e life cycle. 
There was repeated acknowledgement of the complexity of \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e value chains and the potential variations in use-cases. Some voiced concerns about gaps in existing legislation, including intellectual property, legal services, and employment law.\u003c/p\u003e\u003cp\u003e163. Around a quarter of respondents to L2.ii stated that new legislation and regulatory powers would be necessary to effectively allocate liability across the life cycle. Respondents stressed the importance of a legally responsible person for \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e within organisations, with a few suggestions of an \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e equivalent to Data Protection Officers. Some respondents wanted more detail on how the principles will be implemented through existing law, with a few recommending that regulatory guidance would clarify the landscape. A small number of respondents noted that the proposed central functions, including risk assessment, horizon scanning, and monitoring and evaluation, would help assess and adapt the framework to ensure that legal responsibility for new \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e-related risks is adequately distributed. A couple of respondents also suggested pre-deployment measures such as licensing and pre-market approvals.\u003c/p\u003e\u003cp\u003e164. Nearly half of organisations that responded to L3 told us that they used risk assessment processes for \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e, with many building from sectoral best practice or trade body guidance. Respondents pointed to existing legal frameworks that capture \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e-related risks, such as product safety and data protection laws, and stressed that any future \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e measures should avoid duplicating or contradicting existing rules. Respondents suggested that it would be useful for businesses to understand the government’s view on \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e-related best practices, with some recommending a central guide on using \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e safely. Some smaller businesses asked for targeted support to implement the \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e principles.\u003c/p\u003e\u003cp\u003e165. Respondents consistently stressed the importance of transparency as a tool for education, awareness, consent, and contestability. Echoing answers to questions Q2 and F1, many respondents mentioned that organisations should be transparent about \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e use, outputs, and training data.\u003c/p\u003e\u003cp\u003eResponse:\u003c/p\u003e\u003cp\u003e166. We are pleased to note respondents’ broad support for a principles-based approach to \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e regulation that can provide proportionate oversight across the many potential applications and uses of \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e technologies. 
We agree with respondents that, as we implement the framework set out in the white paper, it is important to coordinate between regulators, sectors, existing legal frameworks, and the fast-moving international regulatory landscape. That is why we have been working at pace to establish the activities of the central function outlined in the white paper (for a detailed overview see section 5.1.2).\u003c/p\u003e\u003cp\u003e167. We note that there are still questions regarding how to fairly and effectively allocate legal responsibility for \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e across the life cycle. We also recognise that many responses endorsed further government intervention to ensure the fair and effective allocation of liability across the \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e value chain. Responses stressed the complexity and variability of \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e supply chains, with use-cases highlighting expansive ethical and technical questions. We agree that there is no easy answer to the allocation of legal responsibility for \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e and we also agree that it is important to get liability and accountability for \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e right in order to support innovation and public trust. Building on the commitment to examine foundation models in the white paper, we have focused our initial life cycle accountability work on highly capable general-purpose systems (for details see section 5.2).\u003c/p\u003e\u003cp\u003e168. We are also continuing to analyse how existing legal frameworks allocate accountability and legal responsibility for \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e across the life cycle. Our initial analysis suggests that a context-based approach to regulating \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e may not adequately address risks arising from highly capable general-purpose systems since a context-based approach does not effectively and fairly allocate accountability to developers of those systems. We are exploring a range of potential obligations targeted at the developers of these systems including those suggested by respondents such as pre-market permits, model licensing, accountability and governance frameworks, transparency measures, and changes to existing legal frameworks. As we continue to iterate the \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e regulation framework, we will consider introducing measures to effectively allocate accountability and fairly distribute legal responsibility to those in the life cycle best able to mitigate \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e-related risks.\u003c/p\u003e\u003cp\u003e169. We are encouraged by the wide range of risk assessment and management processes that respondents told us they are already using. Our “Emerging processes for frontier \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e safety” paper outlines a set of practices to inform the development of organisational \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e safety policies\u003csup id=\"fnref:120\" role=\"doc-noteref\"\u003e\u003ca href=\"#fn:120\" class=\"govuk-link\" rel=\"footnote\"\u003e[footnote 120]\u003c/a\u003e\u003c/sup\u003e. 
It provides a snapshot of promising ideas and associated practices in \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e safety today. As discussed in response to questions on the cross-sectoral principles (section 6.1), we acknowledge the broad support for measures on transparency and we will continue our work assessing whether and which measures provide the most meaningful transparency for \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e end users and actors across the \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e life cycle.\u003c/p\u003e\u003ch3 id=\"foundation-models-and-the-regulatory-framework\"\u003e\n\u003cspan class=\"number\"\u003e6.9. \u003c/span\u003e Foundation models and the regulatory framework\u003c/h3\u003e\u003cp\u003e\u003cstrong\u003eF1. What specific challenges will foundation models such as large language models (\u003cabbr title=\"large language models\"\u003eLLMs\u003c/abbr\u003e) or open-source models pose for regulators trying to determine legal responsibility for \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e outcomes?\u003c/strong\u003e\u003c/p\u003e\u003cp\u003e\u003cstrong\u003eF2. Do you agree that measuring compute provides a potential tool that could be considered as part of the governance of foundation models?\u003c/strong\u003e\u003c/p\u003e\u003cp\u003e\u003cstrong\u003eF3. Are there other approaches to governing foundation models that would be more effective?\u003c/strong\u003e\u003c/p\u003e\u003cp\u003eSummary of questions F1-F3:\u003c/p\u003e\u003cp\u003e170. While respondents supported the \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e regulation framework set out in the white paper, many were concerned that foundation models may warrant a bespoke regulatory approach. Some respondents noted that foundation models are characterised by their technical complexity and stressed their potential to underpin many different applications across multiple sectors. Nearly a quarter of respondents emphasised that foundation models make it difficult to determine legal responsibility for \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e outcomes and shared hypothetical use-cases where both upstream and downstream actors are at fault. Respondents stressed that technical opacity, complex supply chains, and information asymmetries prevent sufficient explainability, accountability, and risk assessment for foundation models.\u003c/p\u003e\u003cp\u003e171. Around a fifth of respondents expressed concerns about how foundation models use data, including whether data is of adequate quality, appropriate for downstream applications, compliant with existing law, and sourced ethically. Some stated that it is not clear who is responsible for deciding whether or not data is appropriate to a given application. Respondents stressed that training data currently lacks a clear definition, technical standards, and benchmark measurements.\u003c/p\u003e\u003cp\u003e172. Some respondents noted concerns regarding wider access to \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e, including open source, leaking, or malicious use of models. However, a similar number of respondents noted the importance of open source to \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e innovation, transparency, and trust.\u003c/p\u003e\u003cp\u003e173. 
Half of respondents felt compute was an inadequate proxy for governance requirements, with some recommending assessing models by their capabilities and applications instead. Respondents felt that model verification measures, such as audits and evaluations, would be effective, with some suggesting these should be mandatory requirements. A few noted the importance of downstream monitoring or post-market surveillance.\u003c/p\u003e\u003cp\u003e174. About a third of respondents supported governance measures including tools for trustworthy \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e such as technical standards and assurance. One respondent suggested a pre-deployment sandbox. A few supported moratoriums, bans, or limits. A small number of respondents suggested that contracts, licences, user agreements, and (cyber) security measures could be used to govern foundation models.\u003c/p\u003e\u003cp\u003eResponse:\u003c/p\u003e\u003cp\u003e175. We acknowledge the range of challenges that respondents have raised in regard to foundation models and note the particular attention given to the core characteristics or features of foundation models such as technical opacity and complexity. We also recognise that challenges arise from the fact that foundation models can be broad in their potential applications and, as such, can cut across sectors and impact upon a range of risks. Our analysis shows that many regulators can struggle to enforce existing rules and laws on the developers of highly capable general-purpose \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e systems within their current statutory remits in a way that effectively mitigates risk.\u003c/p\u003e\u003cp\u003e176. In response to repeated calls for specific regulatory interventions targeted at foundation models, we have been exploring the impact of foundation models on life cycle accountability for \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e. In the \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e regulation white paper, we stated that legal responsibility for \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e should sit with the actor best able to mitigate any potential risks it poses. Our assessment suggests that, despite their ability to mitigate risks when designing and developing \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e, the organisations building highly capable general-purpose systems are currently unlikely to be impacted by existing rules and laws in a way that sufficiently mitigates risk. That is why we are exploring options for targeted, proportionate interventions focusing on these systems and the risks that they present. We have been assessing measures to mitigate risk during the design, training, and development of highly capable general-purpose systems. We have also been exploring options for ensuring effective accountability, including legally mandated obligations, while avoiding cumbersome red-tape.\u003c/p\u003e\u003cp\u003e177. We note respondent views that compute is an imperfect proxy for foundation model capability. As part of our work exploring the right guardrails for highly capable general-purpose systems, we are examining how best to scope any regulatory requirements based on model capabilities, and the risks associated with these, wherever possible. But we recognise that, in some cases, controls might need to be in place before a model’s capability is known. 
In these cases, limited and careful use of proxies may be necessary to target regulatory requirements to only those systems that pose the most significant potential risks. Our early analysis indicates that initial thresholds could be based on forecasts of capabilities using a combination of two proxies: compute and capability benchmarking. However there might need to be a range of thresholds. For more detail, see section 5.2.\u003c/p\u003e\u003cp\u003e178. To provide greater clarity on best practices for responsible \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e innovation – including using data – we published a set of emerging safety processes for frontier \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e companies for the \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e Safety Summit in 2023\u003csup id=\"fnref:121\" role=\"doc-noteref\"\u003e\u003ca href=\"#fn:121\" class=\"govuk-link\" rel=\"footnote\"\u003e[footnote 121]\u003c/a\u003e\u003c/sup\u003e. The document consolidates emerging thinking in \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e safety and has been written for \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e organisations and those who want to better understand their safety policies. We will update this guide by the end of the year and continue to encourage \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e companies to develop best practices (see section 5.2.2 for detail).\u003c/p\u003e\u003cp\u003e179. We acknowledge respondents’ views on both the value and risks of open source \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e. Open access can provide wide benefits, including helping to mitigate some of the risks caused by highly capable general-purpose systems. However, open release can also exacerbate the risk of misuse. We believe that all powerful and potentially dangerous systems should be thoroughly risk-assessed before being released. We will continue to monitor and assess the impacts of open model access on risk. We will also carefully consider the impact of any potential measures to regulate open source systems on competition, innovation, and wider risk mitigation.\u003c/p\u003e\u003cp\u003e180. As set out in section 5.2, we will continue our technical policy analysis to refine our thinking on highly capable general-purpose systems in the context of \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e regulation and life cycle accountability. We will continue to engage with external experts on a range of challenging topics such as how effective voluntary measures could be at mitigating risks and the right scope of any additional regulatory interventions including proxies and capability thresholds. We will also continue to examine questions related to accountability and liability, including the extent to which existing laws and regulators can “reach” through the value chain to target the developers of highly capable general-purpose systems and the potential impact of open release. We will also engage with regulators to learn from their existing work on this topic. For example, we will continue to engage with the \u003cabbr title=\"Competition and Markets Authority\"\u003eCMA\u003c/abbr\u003e on their work on foundation models.\u003c/p\u003e\u003ch3 id=\"ai-sandboxes-and-testbeds\"\u003e\n\u003cspan class=\"number\"\u003e6.10. 
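To make the two-proxy idea concrete, here is a minimal illustrative sketch. Everything specific in it is assumed for demonstration: the numeric thresholds, the `ModelProfile` fields, and the rule that a model falls in scope if either proxy trips are not taken from the consultation response, which sets no numeric values and does not specify how the proxies would be combined.

```python
from dataclasses import dataclass

# Hypothetical thresholds for illustration only; the consultation response
# does not set numeric values for either proxy.
COMPUTE_THRESHOLD_FLOP = 1e26   # assumed training-compute cut-off (FLOP)
BENCHMARK_THRESHOLD = 0.85      # assumed aggregate capability-benchmark score

@dataclass
class ModelProfile:
    name: str
    training_flop: float        # estimated total training compute
    benchmark_score: float      # normalised score across capability evaluations

def in_initial_scope(model: ModelProfile) -> bool:
    """Flag a model for further regulatory requirements if either proxy trips.

    Mirrors the idea of using compute and capability benchmarking as
    forecasting proxies when capabilities cannot yet be measured directly.
    """
    return (model.training_flop >= COMPUTE_THRESHOLD_FLOP
            or model.benchmark_score >= BENCHMARK_THRESHOLD)

# Example: a frontier-scale model trips the compute proxy alone.
frontier = ModelProfile("example-model", training_flop=3e26, benchmark_score=0.7)
print(in_initial_scope(frontier))  # True
```

In practice, as paragraph 177 notes, a range of thresholds might be needed rather than the single pair of cut-offs used in this sketch.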
178. To provide greater clarity on best practices for responsible AI innovation, including using data, we published a set of emerging safety processes for frontier AI companies for the AI Safety Summit in 2023[footnote 121]. The document consolidates emerging thinking in AI safety and has been written for AI organisations and those who want to better understand their safety policies. We will update this guide by the end of the year and continue to encourage AI companies to develop best practices (see section 5.2.2 for detail).

179. We acknowledge respondents’ views on both the value and risks of open source AI. Open access can provide wide benefits, including helping to mitigate some of the risks caused by highly capable general-purpose systems. However, open release can also exacerbate the risk of misuse. We believe that all powerful and potentially dangerous systems should be thoroughly risk-assessed before being released. We will continue to monitor and assess the impacts of open model access on risk. We will also carefully consider the impact of any potential measures to regulate open source systems on competition, innovation, and wider risk mitigation.

180. As set out in section 5.2, we will continue our technical policy analysis to refine our thinking on highly capable general-purpose systems in the context of AI regulation and life cycle accountability. We will continue to engage with external experts on a range of challenging topics, such as how effective voluntary measures could be at mitigating risks and the right scope of any additional regulatory interventions, including proxies and capability thresholds. We will also continue to examine questions related to accountability and liability, including the extent to which existing laws and regulators can “reach” through the value chain to target the developers of highly capable general-purpose systems and the potential impact of open release. We will also engage with regulators to learn from their existing work on this topic. For example, we will continue to engage with the CMA on its work on foundation models.

6.10. AI sandboxes and testbeds

S1. To what extent would the sandbox models described in section 3.3.4 support innovation?

S2. What could government do to maximise the benefit of sandboxes to AI innovators?

S3. What could government do to facilitate participation in an AI regulatory sandbox?

S4. Which industry sectors or classes of product would most benefit from an AI sandbox?

Summary of questions S1-S4:

181. Overall, respondents were strongly supportive of a regulatory sandbox for AI. The highest proportion of respondents agreed that the “multiple sector, multiple regulator” and “single sector, multiple regulator” sandbox models would be most likely to support innovation, stating that the cross-sectoral or cross-regulator basis would help develop effective guidance in response to live issues, harmonise rules, and coordinate implementation of the AI regulation framework. While there was no majority consensus on a specific sector that would most benefit from a sandbox, the largest proportion of question respondents stated that healthcare and medical devices would benefit most from an AI sandbox, followed by financial services and transport.

182. Some respondents suggested collaborating with the wider AI ecosystem to maximise the benefit of sandboxes to AI innovators. Many recommended building on the existing strengths of the UK regulatory landscape, such as the DRCF. Linked to this, a few respondents noted that an AI regulatory sandbox presents an opportunity for the UK to demonstrate global leadership in AI regulation and technical standards by sharing findings and best practice internationally.

183. Some respondents recommended making information accessible to maximise the benefit of the sandbox to participants and the wider AI ecosystem. Respondents wanted participation pathways, training, tools, and other resources to be technically and financially accessible. Many respondents noted that accessible guidance and tools would allow organisations to engage with the sandbox. In particular, respondents emphasised the benefits of accessible information for smaller businesses and start-ups that are new to the regulatory process. Respondents advocated for regular reporting on sandbox processes, evidence, findings, and outcomes to encourage “business-as-usual” best practices for AI across the wider ecosystem.

184. Respondents noted the importance of reducing the administrative burden on smaller businesses and start-ups to lower the barrier to entry for those with fewer organisational resources. Some noted that financial support would help ensure that smaller businesses and start-ups could participate in resource-intensive, research and development focused AI sandboxes. Respondents felt that sharing evidence, guidance, and tools would ensure the wider AI ecosystem benefitted from the sandbox. Some suggested access to datasets or product accreditation schemes would incentivise participation in supervised test environment sandboxes.

Response:

185. The response to the consultation, which aligns with independent research commissioned through the Regulators’ Pioneer Fund, has helped to inform the government’s decision to fund a pilot multi-regulator advisory service offered by the DRCF: the AI and Digital Hub. In particular, it has helped to clarify that a new regulatory service is likely to add most value by supporting AI innovators from a range of sectors to navigate the multiple regulatory regimes that govern the use of cross-cutting AI products and services, rather than by targeting one specific regulatory remit or regulated sector.

186. The DRCF AI and Digital Hub brings together four of the most critical regulators of AI and digital technologies: the CMA, ICO, Ofcom, and the Financial Conduct Authority (FCA). Together these regulators are responsible for overseeing some of the most significant regulatory regimes that govern AI products, whether cross-economy (data protection, competition and consumer regulation) or sectoral (financial services, telecommunications and broadcasting).

187. Respondents to the consultation also emphasised the importance of making information and resources relating to the sandbox accessible in order to maximise its benefits. Respondents noted the need to reduce the compliance burden for smaller businesses and start-ups in particular. Again, these considerations are central to the design and operation of the DRCF AI and Digital Hub. In addition to providing tailored support to participating innovators, accessed via a simple online application process, the Hub will also publish anonymised case studies and guidance to support a broader pool of innovators facing similar compliance challenges. Our research has indicated that a repository of use cases such as this will be a particularly effective means of amplifying the outreach and impact of such a pilot.

188. We note that some respondents suggested that additional incentives such as product accreditation or access to data would encourage participation in a sandbox for AI. These additional incentives would best suit a supervised test environment sandbox model. As the DRCF’s AI and Digital Hub pilot phase will focus on providing compliance support, these additional incentives will not be included. However, we are committed to reviewing how the service needs to develop, and what further measures are necessary to support AI and digital innovators, in the light of the pilot findings and further feedback from stakeholders.

Annex A: method and engagement

Consultation method and engagement summary

1. With the publication of the AI regulation white paper on 29 March 2023, we held a formal 12-week public consultation that closed on 21 June 2023. In total, we heard from over 545 different individuals and organisations.

2. Stakeholders were invited to submit evidence in response to 33 questions on the government’s policy proposals for a regulatory framework for AI, through an online survey, email, or post. In total, we received 409 responses in writing. Removing 50 duplicates and blanks left 359 written submissions. See Method for analysing written submissions below for more detail.

3. We also proactively engaged with 364 individuals through roundtables, technical workshops, bilaterals, and a programme of ongoing regulator engagement. Our roundtables sought the views of stakeholders that we might hear from less often, with topics including the impact of AI on marginalised communities, public trust, and citizen perspectives. We also held roundtables focused on smaller businesses and the open source community. More detail can be found in the Engagement method and Engagement findings sections below.

Method for analysing written submissions

4. We received written consultation responses from organisations and individuals through an online survey and email. Of the total 409 responses, we received 232 through our online survey and 177 by email.

5. Of the 33 questions, 12 were closed questions with predefined response options on the online survey. We manually coded emailed submissions that explicitly responded to these closed questions to follow the Likert-scale structure. The remaining 21 questions invited free-text qualitative responses, and each response was individually analysed and manually coded. As such, quantitative analysis represents all stakeholders who answered a specific question through email or the online survey. Not all respondents answered every question, so we present our findings as an approximate proportion of responses to each question.

6. In accordance with our privacy notice[footnote 122] and online survey privacy agreement, only those individuals and organisations who submitted evidence through our online survey and consented to our privacy agreement will have their names published in the list of respondents (see Annex B).

7. Respondents to the online survey self-selected an organisation type and sector. We manually assigned organisation types and sectors to respondents who submitted written evidence through email. After removing blanks and duplications, we received responses from across 8 organisation types and 18 sectors. Chart M1 shows response numbers by organisation type. The majority of responses came from industry, business, trade unions, and trade associations, followed by individuals not representing an organisation and then research groups, universities, and think tanks.
8. Chart M1: AI regulation white paper consultation respondents by organisation type

Organisation type | Total
Industry, business, trade union, association | 132
Individuals | 63
Research organisation, university, think tank | 39
Small or medium-sized enterprises (SMEs) | 37
Charity, non-profit, social, civic, activist | 28
Other | 24
Regulators | 23
Legal services or professional advisory body | 13

9. Chart M2: AI regulation white paper consultation respondents by sector

Sector | Total
AI, digital, and technology | 75
Other | 69
Arts and entertainment | 31
Financial services and insurance | 22
Education | 22
Research and development | 22
Healthcare | 20
Legal services | 19
IT | 18
Undisclosed | 14
Public sector | 14
Regulation | 13
Communications | 8
Secondary sectors | 5
Transportation | 4
Real estate | 2
Primary sectors | 1

M2 Note: Primary sectors include extraction of raw materials, farming, and fishing. Secondary sectors include utilities, construction, and manufacturing.
10. The sector breakdown in Chart M2 shows that the largest number of responses came from the AI, digital, and technology industry, followed by respondents who selected “other” and then those in the arts and entertainment sector. Further analysis of “other” responses suggests that these often came from individuals not representing an organisation, including students.

11. As these demographics indicate, this sample, as with all written consultation samples, may not be representative of public opinion, as some groups are over- or under-represented.

12. In particular, we note that responses received from a number of creative industries stakeholders were either identical or very similar. These responses largely focused on AI and copyright. They were analysed and included in the same way as all other responses.

13. 89 emailed pieces of evidence followed the question structure of our online survey. These were analysed alongside responses from the survey to inform quantitative analysis. After removing duplicate responses, we included 66 emailed responses in our analysis.

14. 88 emailed responses provided evidence beyond the scope of our consultation questions or without explicit reference to the questions. We analysed these submissions individually. While our findings from this analysis inform our overall response, we do not include these responses within our quantitative analysis as they do not explicitly answer our consultation questions. Where relevant, we have used insights from these responses to inform our qualitative question summaries. After removing duplicate responses, we included 84 of these in our qualitative analysis.

15. We received 33 duplicate responses that were sent twice through either the online survey or email. We received requests for 4 of these duplications to be deleted on the grounds that they were incorrect and superseded by a later response; these were removed from analysis entirely. The remaining 29 duplicates were responses sent by both online survey and email. Where appropriate, we removed either the email or survey response from our quantitative analysis to avoid skewing counts with duplicate submissions. However, in consideration of any additional detail given, we analysed both responses to weave additional insights into our overall qualitative analysis. A further 17 written responses were discounted from analysis entirely on the grounds that they were blank or contained spam. After reviewing and cross-checking responses, we discounted 50 written submissions from the final analysis to avoid overcounting blanks, spam, and duplicate responses. That left 359 submissions, of which 209 were received through the online survey and 150 by email.

16. We use illustrative qualitative language such as “many”, “some”, and “a few” to summarise the written responses we received to our consultation. These descriptions are intended to give an indication of the extent to which a particular theme or sentiment was raised by respondents. Not all respondents answered every question. We refer to approximate proportions of respondents to each question, including “a half”, “a quarter”, or “a third”. We use the terms “nearly all” or “most” when a substantial majority of respondents made a particular argument or shared a sentiment. We use the terms “a majority” or “over half” to show when a point was shared by over 50% of respondents. We use “many” when lots of respondents raised a similar point but the theme or sentiment was not shared by over half of respondents. We use “some” to indicate when a theme or sentiment was shared by between a tenth and a fifth of respondents. We use “a few” when a smaller number of respondents made a similar point. We use “a small number” to describe when fewer than 10 respondents raised a point, specifying if this is “one” or “two” (“a couple”).
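This vocabulary can be read as a rough lookup rule from response counts to descriptors. The sketch below is illustrative only: the report fixes “over half” (more than 50%), “some” (a tenth to a fifth), and “a small number” (fewer than 10 respondents), while the cut-offs used here for “nearly all” and “many” are assumptions, since the report deliberately leaves those terms approximate.

```python
def describe_share(count: int, respondents: int) -> str:
    """Map a response count to the report's qualitative vocabulary.

    "Over half", "some", and "a small number" follow the definitions in
    paragraph 16; the 90% cut-off for "nearly all" and the 20% floor for
    "many" are illustrative assumptions.
    """
    if count < 10:
        return "a small number"   # explicitly fewer than 10 respondents
    share = count / respondents
    if share >= 0.9:              # assumed threshold for a substantial majority
        return "nearly all"
    if share > 0.5:               # explicitly over 50% of respondents
        return "a majority"
    if share >= 0.2:              # assumed: sizeable, but not over half
        return "many"
    if share >= 0.1:              # explicitly between a tenth and a fifth
        return "some"
    return "a few"

print(describe_share(120, 200))  # "a majority"
print(describe_share(30, 200))   # "some"
```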
We are grateful to the partners who worked with us to organise roundtables and workshops including \u003cabbr title=\"Centre for Data Ethics and Innovation\"\u003eCDEI\u003c/abbr\u003e, the Department for Education (\u003cabbr title=\"Department for Education\"\u003eDfE\u003c/abbr\u003e), the Department of Health and Social Care (\u003cabbr title=\"Department of Health and Social Care\"\u003eDHSC\u003c/abbr\u003e), the Department for Transport (DfT), the Ministry of Justice (MOJ), UK Research and Innovation (\u003cabbr title=\"UK Research and Innovation\"\u003eUKRI\u003c/abbr\u003e), the British Computer Society (\u003cabbr title=\"British Computer Society\"\u003eBCS\u003c/abbr\u003e), Hogan Lovells, Innovate Finance, the Ada Lovelace Institute, the Alan Turing Institute, Open UK, the British Standards Institution (\u003cabbr title=\"British Standards Institution\"\u003eBSI\u003c/abbr\u003e), and the University of Bath \u003cabbr title=\"Accountable, Responsible and Transparent Artificial Intelligence\"\u003eART-AI\u003c/abbr\u003e.\u003c/p\u003e\u003cp\u003e20. Alongside this programme of roundtable discussions and technical workshops, we engaged with 42 stakeholders through external engagements where we presented the \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e regulation framework outlined in the white paper. We also held 28 bilaterals and held meetings with 16 regulators as part of our on-going work to support implementation. We include insights from this engagement throughout the consultation response.\u003c/p\u003e\u003ch3 id=\"engagement-findings\"\u003eEngagement findings\u003c/h3\u003e\u003cp\u003e21. In this section, we provide a brief overview of our roundtables and workshops, summarising insights into four areas based on roundtable focus and participation from:\u003c/p\u003e\u003cul\u003e\n \u003cli\u003eregulators.\u003c/li\u003e\n \u003cli\u003eindustry.\u003c/li\u003e\n \u003cli\u003ecivil society.\u003c/li\u003e\n \u003cli\u003eresearch organisations.\u003c/li\u003e\n\u003c/ul\u003e\u003ch4 id=\"regulators\"\u003eRegulators\u003c/h4\u003e\u003cp\u003e22. We held six roundtables with regulators to understand existing capabilities and needs, including how the approach set out in the \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e regulation white paper would be implemented into specific sectors including health and social care, justice, education, and transport.\u003c/p\u003e\u003cp\u003e23. Regulators reported varying levels of in-house \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e knowledge and capability, with most supporting central measures to enhance technical expertise. Some agreed that a pool of expertise could enhance regulatory capacity, while others suggested that the proposed central function could provide guidance and training materials for regulators.\u003c/p\u003e\u003cp\u003e24. Regulators were broadly supportive of the central function outlined in the white paper, emphasising that they could serve as a useful point of contact for regulators. However, regulators also stressed that the central function should not infringe on the independence or existing statutory remits of regulators, suggesting that any guidance to regulators on the implementation of the principles should not impede, duplicate, or contradict regulators’ current mandates and work.\u003c/p\u003e\u003cp\u003e25. 
Participants at the roundtables emphasised that regulators need adequate resources, endorsing government investment in technical capability and capacity. Some noted that the government may also need to introduce new regulatory powers in order for the framework to be effective, stating that achieving meaningful transparency and contestability may require the government to mandate disclosure from developers and deployers of \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e at set points.\u003c/p\u003e\u003cp\u003e26. Participants raised several challenges to effective regulator oversight specific to \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e including unknown and changing functional boundaries, technical obscurity, unpredictable environments, lack of human oversight or input, and highly iterative technological life cycles. Regulators suggested that collaboration between regulators, safety engineers, and \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e experts is key to creating robust verification measures that prevent, reduce, and mitigate risks.\u003c/p\u003e\u003cp\u003e27. While regulators stated that the principles provide useful common ground across sectors, they noted that sector-specific analysis would be necessary to identify gaps in the framework. Some noted that sector specific use-cases would help regulators apply the principles in their respective domains.\u003c/p\u003e\u003ch4 id=\"industry\"\u003eIndustry\u003c/h4\u003e\u003cp\u003e28. We heard from a range of industry stakeholders at seven roundtable events with topics ranging from international interoperability, responsible \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e in industry, general-purpose \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e, and governance and technical standards needs.\u003c/p\u003e\u003cp\u003e29. Some participants were concerned that market imbalances were preventing innovation and competition across the \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e ecosystem. In particular, participants argued that more accessible, traceable, and accountable data would promote innovation, noting that smaller companies often have to rely on existing market leaders or lower quality datasets due to the lack of affordable commercial, proprietary datasets. Participants suggested that clear standards for data and more equitable access to higher quality datasets would stimulate \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e innovation across the wider ecosystem and prevent incumbent advantages.\u003c/p\u003e\u003cp\u003e30. Participants also noted that some of the potential measures to regulate \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e could allow current market leaders to further entrench their advantages and increase existing market imbalances. Participants noted that smaller businesses and the open source community could face a significant compliance burden, with some suggesting that regulatory sandboxes should be used to test the impact of regulation. 
While some suggested that legal responsibility for \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e should be allocated to earlier stages in the life cycle, others warned that placing the legal responsibility for downstream applications on open source developers would severely limit innovation as they would not be able to account for the many potential uses of open source code.\u003c/p\u003e\u003cp\u003e31. There was no consensus on whether licensing requirements for foundation models would effectively encourage responsible \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e innovation or, instead, concentrate market power among a few established companies. A few participants noted that practical guidance on implementation and use-cases would support organisations to apply the principles. Some participants noted a licensing framework that only allowed open access to some parts of an \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e system’s code could retain some of the benefits of the information sharing and transparency that defines open source.\u003c/p\u003e\u003cp\u003e32. Some participants stated that it is not clear whose job it is to regulate \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e, advocating for a new, \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e-specific regulator or a clear lead regulator. Participants emphasised the importance of technical expertise to effective regulation.\u003c/p\u003e\u003cp\u003e33. Participants also noted the important role of international interoperability, insurance, technical standards, and transparency in market success for \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e.\u003c/p\u003e\u003ch4 id=\"civil-society-and-public-trust\"\u003eCivil society and public trust\u003c/h4\u003e\u003cp\u003e34. Three roundtables were held with smaller businesses, civil society stakeholders, and special interest groups to discuss public trust and the impact of \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e on citizens and marginalised communities.\u003c/p\u003e\u003cp\u003e35. Participants emphasised that fairness and inclusivity were key to realising the benefits of \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e for everyone. Participants noted the importance of diversity in regard to the data used to train and build \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e, as well as the teams who develop, deploy, and regulate \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e. Participants suggested co-creation and collaboration with marginalised communities would ensure that \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e could create benefits for everyone.\u003c/p\u003e\u003cp\u003e36. Participants also stressed that organisations using \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e not only need to be transparent about when and how \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e is used but should also make explanations accessible to different groups. Participants noted that, while \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e can offer benefits to marginalised communities, these populations often face a disproportionate negative impact from \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e. 
Participants called for more education on the use of \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e on the grounds that there is currently a significant lack of consumer awareness, organisational knowledge, and accessible redress routes.\u003c/p\u003e\u003cp\u003e37. Participants noted that regulators have a key role to play in improving access to contest and seek redress for \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e-related harms. Participants emphasised that regulators require adequate funding and resources in order to achieve this. Participants strongly supported a central ombudsman for \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e to improve the accessibility of high-quality legal advice on \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e. Many noted that legal advice on \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e is currently expensive, hard to access, and sometimes given by unregulated providers outside of the legal profession. Participants also noted that the ombudsman would likely receive a large number of small-scale complaints, which they should be adequately equipped to deal with.\u003c/p\u003e\u003cp\u003e38. Participants also advocated for the importance of specific safeguards for young people including potential changes to existing statutory mechanisms such as those for data protection and equality.\u003c/p\u003e\u003ch4 id=\"academia-research-organisations-and-think-tanks\"\u003eAcademia, research organisations, and think tanks\u003c/h4\u003e\u003cp\u003e39. We held three events to hear from academics, research organisations, and think tanks on \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e safety, legal responsibility for \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e, and the \u003cabbr title=\"UK Research and Innovation\"\u003eUKRI\u003c/abbr\u003e’s \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e Technology Mission.\u003c/p\u003e\u003cp\u003e40. Participants suggested differentiating the types of risk posed by \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e, noting that both immediate and long term risks would need to be factored into any safety measures for \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e. Participants felt that sector-specific analysis should inform assessments of \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e-related risks. Participants noted that the technical obscurity of \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e can make it difficult for organisations and regulators to determine the cause of any harms that arise. Participants emphasised that, in order to prevent harms, pre-deployment measures are key to ensuring that \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e is safe for market release.\u003c/p\u003e\u003cp\u003e41. Participants argued that high quality regulation can help \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e move quickly and safely from development to market. Participants argued that there was a need for greater technical knowledge across government and regulators, along with better \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e skills across the wider ecosystem. 
Some called for the certification of AI engineers and developers to enhance public confidence, while another promoted the certification of institutional leads responsible for decisions related to AI. There was no consensus on whether a new, central regulator for AI or existing regulators would implement the proposed framework most effectively. However, participants agreed that aligning regulatory guidance and sharing expertise across sectors would build compliance capability. Participants suggested a “mixed economy” of regulation, with statutory requirements to ensure rules worked effectively.

42. Participants noted that AI life cycles are varied and complex. Participants wanted the government to define actors across the AI life cycle and determine corresponding obligations to clarify the landscape. However, there was no agreement on the best way to do this, with participants suggesting actors may be defined by their function (as in data protection regulation), market power or benefit (as in digital markets regulation), or proximity to and reasonable foreseeability of risks (as in product safety legislation). While some participants wanted to see more stringent responsibilities for foundation model developers, others warned that too narrow a focus could mean that other AI-related opportunities might be missed.

Annex B: List of consultation respondents

List of consultation respondents

1. We are grateful to all the individuals and organisations who shared their insights with us over the course of the consultation period.

2. Our AI regulation framework is intended to be collaborative and we will continue to work closely with regulators, academia, civil society, and the public in order to monitor and evaluate the effectiveness of our approach.

3. In accordance with our privacy notice[footnote 123] and online survey privacy agreement, only those individuals and organisations who submitted evidence through our online survey and consented to our privacy agreement there have their names listed below. The list represents the 209 online survey submissions that we analysed after cleaning the data for duplications, blanks, and spam (see Annex A for details).
Names are listed as they were given, with personal names removed if an organisation name was available. We provide 207 names here as 2 responses included no name.

4. Further detail on the organisation type and sector of those who submitted written responses by email and online survey can be found in the extended method for analysing written responses in Annex A.

Respondents to the online consultation survey

1. Adarga Limited
2. ADS Group
3. Advai Ltd
4. AGENCY: Assuring Citizen Agency in a World with Complex Online Harms
5. Agile Property & Homes Limited
6. AI & Partners
7. AI Centre for Value Based Healthcare
8. Aidan Freeman
9. AIethics.ai
10. Alacriter
11. Aligned AI
12. Alliance for Intellectual Property
13. Altered Ltd
14. Amendolara Holdings Limited
15. Anton
16. Arran McCutcheon
17. ART-AI, University of Bath
18. Arts Council England
19. Association for Computing Machinery Europe Technology Policy Committee
20. Association of British HealthTech Industries
21. Association of Chartered Certified Accountants (ACCA)
22. Association of Financial Mutuals
23. Association of Illustrators
24. Association of Learned and Professional Society Publishers
25. Assuring Autonomy International Programme, University of York
26. Avi Semelr
27. Baringa Partners LLP
28. Barnacle Labs
29. Barry O’Brien
30. Ben Hopkinson
31. BPI British Phonographic Industry
32. Bristows LLP
33. British Copyright Council
34. British Pest Control Association
35. British Security Industry Association
36. Brunel University London Centre for Artificial Intelligence: Social & Digital Innovations
37. BSI Group The Netherlands B.V.
38. BT Group
39. Bud Financial
40. Calvin Karpenko
41. Carlo Attubato
42. Center for AI and Digital Policy Washington, DC. USA
43. Centre for Policy Studies
44. Charlie Bowler
45. Chegg, Inc.
46. Cisco
47. City, University of London
48. Cogstack
49. Colin Hayhurst
50. Congenica Ltd
51. Craig Meulen
52. Creators’ Rights Alliance
53. CTRL-Shift & Collider Health
54. Cyferd
55. CyLon Ventures
56. DACS (Design and Artists Copyright Society)
57. Daniel Marsden
58. Darrell Warner Limited
59. Deborah W.A. Foulkes
60. Deloitte UK
61. Developers Alliance
62. Department for Education (DfE)
63. Direct Line Group
64. DNV
65. Dr. Michael K. Cohen
66. EasyJet Airline Company Ltd.
67. Ed Hagger
68. EKC Group
69. Elliott Andrews
70. Emily Gray
71. Emma Ahmed-Rengers
72. Enzai Technologies Limited
73. Equity
74. Eviden
75. Experian UK&I
76. Falcon Windsor
77. FlyingBinary
78. ForHumanity
79. Freeths LLP
80. Fujitsu
81. Full Fact
82. Geeks Ltd.
83. Getty Images
84. GlaxoSmithKline plc
85. Glenn Donaldson
86. Global Witness
87. Greg Colbourn
88. Greg Mathews
89. Guy Warren
90. Hazy
91. Henry
92. Hollie
93. Hugging Face
94. Iain Darby
95. International Federation of the Phonographic Industry (IFPI)
96. INRO London
97. Institute for the Future of Work
98. Institute of Chartered Accountants in England and Wales (ICAEW)
99. Institute of Innovation and Knowledge Exchange (IKE Institute)
100. Institute of Physics and Engineering in Medicine
101. Institute of Physics and Engineering in Medicine (Clinical and Scientific Computing group)
102. Institution of Occupational Safety and Health
103. International Federation of Journalists
104. Jake Bailey
105. Jake Wilkinson
106. Japan Electronics and Information Technology Industries Association
107. Joe Collman
108. Johnny Luk
109. Johnson & Johnson
110. Jonas Herold-Zanker
111. Joseph Johnston
112. Judith Barker
113. Kainos Software Ltd
114. Kelechi Ejikeme
115. Knowledge Associates Cambridge Ltd.
116. Labour for the Long Term
117. Legal & General Group PLC
118. Leverhulme Centre for the Future of Intelligence
119. Lewis
120. LSE Law, Technology and Society Group
121. Lucy Purdon
122. Luke Richards
123. Lumi Network
124. Market Research Society
125. Marta
126. Martin Gore
127. Mastercard Europe
128. MedTech Europe
129. Megha Barot
130. Michael Fisher
131. Michael Pascu
132. Microsoft
133. Mind Foundry
134. Mukesh Sharma
135. National Physical Laboratory
136. National Taxpayers Union Foundation (NTUF)
137. National Union of Journalists
138. NATS
139. Nebuli Ltd.
140. Newcastle University
141. Newsstand
142. Nicole Hawkesford
143. Office for Standards in Education, Children’s Services and Skills (Ofsted)
144. Office for Statistics Regulation
145. Orbit RRI
146. Paul Dunn
147. Paul Evans
148. Paul Ratcliffe
149. Pearson
150. Phrasee
151. Pippa Robertson
152. Planar AI Limited
153. Policy Connect
154. Professional Publishers Association
155. Professor Julia Black
156. PRS for Music
157. Publishers Association
158. Publishers’ Licensing Services
159. Pupils 2 Parliament
160. Queen Bee Marketing Hive
161. Rebecca Palmer
162. RELX
163. Reset
164. Rohan Vij
165. Royal Photographic Society of Great Britain
166. Salesforce
167. SambaNova Systems inc
168. Samuel Frewin
169. SAP
170. Scale AI
171. ScaleUp Institute
172. Scott Timcke
173. Seldon
174. Sharon Darcy
175. Simon Kirby
176. Skin Analytics Ltd
177. South West Grid for Learning
178. Stability AI
179. Steve Kendall
180. STFC Hartree Centre
181. Surrey Institute for People-Centred Artificial Intelligence
182. Teal Legal Ltd
183. Temple Garden Chambers
184. The Copyright Licensing Agency Ltd
185. The Data Lab Innovation Centre
186. The Institute of Customer Service
187. The Multi-Agency Advice Service (MAAS) AI and Digital Regulations Service for health and social care.
188. The Operational Research Society
189. The Pharmacists’ Defence Association (PDA)
190. The Physiological Society
191. The Publishers Association
192. The Society of Authors
193. The University of Winchester
194. Tom Edward Ashworth
195. TRANSEARCH International
196. Trilateral Research
197. University of Edinburgh
198. University of Edinburgh
199. University of Winchester
200. Valentino Giudice
201. ValidMind
202. W Legal Ltd
203. Wales Safer Communities Network (membership from Police, Fire, Local Authorities, Probation and Third Sector), hosted by WLGA
204. Warwickshire County Council
205. We and AI
206. Workday
207. Writers’ Guild of Great Britain

Annex C: Individual question summaries

The revised cross-sectoral AI principles
1. Do you agree that requiring organisations to make it clear when they are using AI would improve transparency?

1. A majority of respondents agreed that requiring organisations to make it clear when they are using AI would adequately ensure transparency. Respondents who disagreed felt that labelling AI use would be either insufficient or disproportionately burdensome.

2. Respondents who argued the measure would be insufficient often stated that regulators lack the relevant powers, funding, and capabilities to adequately ensure transparency. Linked to this, respondents noted issues around enforcement and access to appeal and redress. Some respondents recommended that the government should consider relevant statutory measures and accountability mechanisms. A few respondents suggested that explanations should be targeted to the context and audience.

3. Other respondents were concerned that a blanket requirement for transparency would create a burdensome barrier for lower-risk AI applications. One respondent noted that the proposal assumes a single actor in the AI value chain will have adequate visibility across potentially many life cycle stages and applications. A few respondents wanted to see clear thresholds (including “high-risk applications”) and guidance from the government and regulators on transparency requirements.

4. Respondents were concerned about potential interactions between transparency measures and existing and forthcoming legislation, such as that for data protection and intellectual property.

2. Are there other measures we could require of organisations to improve transparency for AI?

5. There was strong support for a range of transparency measures from respondents. Respondents stressed that transparency was key to building public trust, accountability, and an effective and verifiable regulatory framework.

6. Many respondents endorsed clear reporting obligations on the inputs used to build and train AI. Respondents noted that transparency would be improved through the disclosure of a range of inputs, from data to compute. Echoing responses to question F1 on foundation models, concerns coalesced around whether training data was of sufficient quality, compliant with existing legal frameworks including intellectual property and data protection, and appropriate for downstream uses. A few respondents argued that compute disclosure would improve transparency on the environmental impacts of AI.

7. Many respondents also supported the labelling of AI use and outputs, with many recommending the measure to improve user awareness and organisational accountability.
Some respondents suggested that labelling AI-generated outputs would help combat AI-generated misinformation and promote intellectual property rights. A few respondents wanted to see clearer opt-ins for uses of data and AI, with options for human alternatives.

8. Some respondents endorsed measures that would encourage explanations for AI outcomes and potential impacts. These include measures for showing users how models produced outputs or answers, as well as for addressing model limitations and impacts. Similarly, a few respondents noted the importance of organisational and public education through accessible information and targeted awareness raising. A couple of respondents suggested that public or organisational registers for (high-risk) AI would help improve awareness.

9. While some respondents advocated for reporting on model details, many emphasised that complex technical information would be best disclosed to regulators and independent verifiers rather than the public. Respondents suggested that organisations share technical model details such as weights, parameters, uses, and testing. Respondents stated that impact and risk assessments, as well as governance and marketing decisions, should be available to either regulators or the public, with a few noting potential compromises with trade secrets. Some respondents endorsed independent assurance techniques, such as third-party audits and technical standards.

10. A few respondents suggested clarifying legal rights and responsibilities for AI, with a few of those recommending the introduction of AI legislation and non-compliance measures.

3. Do you agree that current routes to contest or get redress for AI-related harms are adequate?

11. Over half of respondents reported that current routes to contest or seek redress for AI-related harms through existing legal frameworks are not adequate. In particular, respondents flagged that a lack of transparency around when and how AI is used prevents users from being able to identify AI-related harms. Similarly, respondents noted that a lack of transparency around the data used to train AI models complicates data protection and prevents intellectual property rights holders from exercising their legal and moral rights. A few respondents also noted the high costs of individual litigation and advocated for clearer routes for individual and collective action.
4. How could current routes to contest or seek redress for AI-related harms be improved, if at all?

12. Many respondents wanted to see the government clarify legal rights and responsibilities relating to AI, though there was no consensus on how to do this. Many respondents suggested clarifying rights and responsibilities in existing law through mechanisms such as regulatory guidance. There was also a broad appetite for centralisation in different forms, with some respondents advocating for the creation of a central redress mechanism such as a central AI regulator, oversight body, coordination function, or lead regulator. Some respondents wanted to see further statutory requirements, such as licensing.

13. Many respondents stressed the importance of meaningful transparency and some emphasised the need for accessible redress routes. Respondents felt that measures to show users when and how AI is being used would help individuals identify when and how harms had occurred. Respondents wanted to see clear – and in some cases mandatory – routes to contest or seek redress for AI-related decisions. Respondents noted issues with expensive litigation, particularly in relation to infringement of intellectual property rights. Respondents felt that increasing transparency for AI systems would make redress more accessible across a broad range of potential harms and, similarly, that clarifying redress routes would improve transparency. Some respondents noted the importance of international agreements to ensure effective routes to contest or seek redress for AI-related harms across borders. Measures such as moratoriums and mandatory kill switches were only raised by a few respondents.

5. Do you agree that, when implemented effectively, the revised cross-sectoral principles will cover the risks posed by AI technologies?

14. A majority of respondents agreed that the principles would cover the risks posed by AI technologies when implemented effectively. Respondents who disagreed tended to cite concerns around enforcement and a lack of statutory backing for the principles, or wider issues around regulator readiness, including capacity, capabilities, and coordination.

15. Respondents often noted a need for the framework to be adaptable, context-focused, and supported by monitoring and evaluation, citing the fast pace of technological change.

16. A few respondents felt the terms of the question were unclear and asked for further detail on effective implementation.

6. What, if anything, is missing from the revised principles?
17. Many respondents advocated for the cross-sectoral AI principles to more explicitly include human rights and human flourishing, noting that AI should be used to improve human life. Respondents endorsed different human rights and related values, including freedom, pluralism, privacy, equality, inclusion, and accessibility.

18. Some respondents wanted further detail on the implementation of the principles. These respondents often asked for more detail on regulator capacity, noting that the “effective implementation” of the principles would require adequate regulator resource, skills, and powers. A couple of respondents asked for more clarity regarding how regulators and organisations are expected to manage trade-offs, such as explainability and accuracy or transparency and privacy.

19. Linked to this, some respondents wanted further guidance on how the AI principles would interact with and be implemented through existing legislation. Respondents mostly raised concerns in regard to data protection and intellectual property law, though a few respondents asked for a more holistic sense of the government’s approach to AI in regard to departmental strategies, such as the Ministry of Defence’s AI strategy. Some respondents stated that the principles would be ineffective without statutory backing, with a few emphasising the importance of mandating AI-related accountability mechanisms.

20. Some respondents advocated for the principles to address a range of issues related to operational resilience. These responses suggested measures for adequate security and cyber security, decommissioning processes, protecting competition, ensuring access, and addressing risks associated with over-reliance. A similar number of respondents wanted to see specific principles on data quality and international alignment.

21. A few respondents recommended the inclusion of principles that would clearly correlate with systemic risks and wider societal impacts, sustainability, or education and literacy. In regard to systemic risks, respondents tended to raise concerns about the potential harms that AI technologies can pose to democracy and the rule of law in terms of disinformation and electoral interference.

A statutory duty to regard

7. Do you agree that introducing a statutory duty on regulators to have due regard to the principles would clarify and strengthen regulators’ mandates to implement our principles, while retaining a flexible approach to implementation?

22. Over half of respondents somewhat or strongly agreed that a statutory duty would clarify and strengthen the mandate of regulators to implement the framework. However, many noted caveats that are detailed in Q8.
8. Is there an alternative statutory intervention that would be more effective?

23. Many felt that targeted statutory measures, including expanded regulator powers, would be a more effective statutory intervention. In particular, respondents noted the need for regulators to have appropriate investigatory powers. Some also wanted to see the consequences of breaches more clearly defined. Respondents also suggested specific AI legislation, a new AI regulator, and strict rules about the use of AI in certain contexts as more effective statutory interventions. A couple of respondents mentioned that any AI duties should be on those operating within the market as opposed to on regulators.

24. Some respondents felt the proposed statutory duty is the most effective intervention and should be implemented. However, other respondents couched their support within wider concerns that the framework would not be sufficiently enforceable without some kind of statutory backing. Nearly a quarter of respondents emphasised that regulators would need enhanced resources and capabilities in order to enact a statutory duty effectively. Other respondents felt that the implementation of a duty to regard could disrupt regulation, innovation, and trust if rushed. These respondents recommended that the duty should be reviewed after a period of non-statutory implementation, particularly to observe interactions with existing law and regulatory remits. A few respondents noted that the end goal and timeframes for the AI regulatory framework were not clear, causing uncertainty.

25. There was some support for the government to mandate measures such as third-party audits, certification, and Environmental, Social and Governance (ESG)-style supply chain measures, including reporting on training data. A few respondents were supportive of central monitoring to track regulatory compliance and novel technologies that may require an expansion of regulatory scope.

New central functions to support the framework

9. Do you agree that the functions outlined in section 3.3.1 would benefit our AI regulation framework if delivered centrally?

26. Nearly all respondents agreed that central delivery of the proposed functions would benefit the framework, with many arguing centralised activities would allow the government to monitor and iterate the framework. Many suggested that feedback from regulators, industry, academia, civil society, and the general public should be used to measure effectiveness, with some calling for regular review points to assess whether the central function remained fit for purpose.
A few respondents were concerned that some of the proposed activities may already be carried out by other organisations and suggested mapping existing work to avoid duplication.

10. What, if anything, is missing from the central functions?

27. While respondents widely supported the proposed central functions, many wanted to see more detail on the delivery of each activity, with some respondents endorsing a stronger emphasis on engagement and partnerships with existing organisations.

28. Responses highlighted the importance of addressing AI-related risks and building public trust in AI technologies. Some respondents suggested that the government should prioritise the proposed risk function, noting the importance of identifying and assessing risks related to AI. Respondents noted that this risk analysis should include ethical risks, such as bias, and systemic risks to society, such as changes to the labour market. A few respondents emphasised that the education and awareness function would be key to building public trust.

29. Respondents noted the importance of regulatory alignment across sectors and international regimes. Some respondents argued that the central functions should include more on interoperability, noting cyber security, disinformation, and copyright infringement as issues that will require international collaboration.

30. Some respondents suggested that some or all of the central functions should have a statutory underpinning or be delivered by an independent body. Respondents also stressed that, to be effective, the central functions should be adequately resourced and given the necessary technical expertise. This was identified as particularly important to the risk mapping, horizon scanning, and monitoring and evaluation functions.

31. Additional activities or functions suggested by respondents included: statutory powers to ensure the safety and security of highly capable AI models; coordination with the devolved administrations; and oversight of AI compliance with existing laws, including intellectual property and data protection frameworks.

11. Do you know of any existing organisations who should deliver one or more of our proposed central functions?

32. Overall, around a quarter of respondents felt that the government should deliver one or more of the central functions. Respondents also highlighted other organisations that could support the central functions, including regulators, technology-focused research institutes and think tanks, private-sector firms, and academic research groups. Many respondents advocated for the regulatory functions to build from the existing strengths of the UK’s regulatory ecosystem. Respondents noted that regulatory coordination initiatives like the Digital Regulation Cooperation Forum (DRCF) could help identify and respond to gaps in regulator remits.
Respondents also highlighted that think tanks and research institutes such as the Alan Turing Institute, Ada Lovelace Institute, and Institute for the Future of Work have past or existing activities that may complement those described in the proposed central functions.

12. Are there additional activities that would help businesses confidently innovate and use AI technologies?

33. Many respondents felt the central functions could have further activities to support businesses to apply the principles to everyday practices related to AI. Respondents argued that the government and regulators should support industry with training programmes and educational resources. Respondents noted that this support would be especially important for organisations operating across or between sectors.

34. Respondents felt that regulators should develop and regularly update guidance to allow businesses to innovate confidently. Respondents reported that incoherent and expensive compliance processes could stifle innovation and slow AI adoption.

35. Respondents suggested that the government could improve access to high-quality data, ensure international alignment on AI requirements, and facilitate collaboration between regulators, industry, and academia. Some respondents noted that responsible AI innovation is supported by access to high-quality, diverse, and ethically-sourced data. Respondents suggested that government-sponsored data trusts could help improve access to data. Some respondents saw the government playing a key role in ensuring the international harmonisation of AI regulation, noting that interoperability would promote trade and competition. A few respondents suggested that the government could facilitate collaboration between regulators, industry, and academia to ensure alignment between AI regulation, innovation, and research. A small number of respondents suggested introducing AI legislation rather than central functions to provide greater legal certainty.

12.1. If so, should these activities be delivered by government, regulators, or a different organisation?

36. While respondents identified some activities to support businesses to confidently innovate and use AI technologies that should be led by regulators, a majority of respondents suggested that these activities should be delivered by the government.

13. Are there additional activities that would help individuals and consumers confidently use AI technologies?
37. Respondents prioritised transparency from the cross-sectoral principles, with nearly half arguing that individuals and consumers should be able to identify when and how AI is being used by a service or organisation.

38. Many respondents felt that education and training would build public trust in AI technologies and help accelerate adoption. Respondents emphasised that AI literacy should be improved through education and training that enables consumers to use AI products and services more effectively. Respondents suggested that training should cover all stages of the AI life cycle and build understanding of AI benefits as well as AI risks. Respondents stated that, along with the government and regulators, education, consumer, and advocacy organisations should help make knowledge accessible.

39. Some respondents wanted to see clearer routes for consumers to contest or seek redress for AI-related harms. Some emphasised the importance of adequate data protection measures. A few respondents noted that AI-specific legislation would provide legal certainty and help foster public trust.

13.1. If so, should these activities be delivered by the government, regulators, or a different organisation?

40. While most respondents recommended that the government, regulators, industry, and civil society work together to help individuals and consumers confidently use AI technologies, nearly half of respondents suggested that activities to improve consumer confidence in AI should be delivered by the government.

14. How can we avoid overlapping, duplicative, or contradictory guidance on AI issued by different regulators?

41. Many respondents suggested the proposed central functions would be the most effective mechanism to avoid overlapping, duplicative, or contradictory guidance. Respondents noted that the central functions would support regulators by identifying cross-sectoral risks, facilitating consistent risk management actions, providing guidance on cross-sectoral issues, and monitoring and evaluating the framework as a whole.

42. While respondents stressed that consistent implementation of the framework across remits would require regulatory coordination, there was no agreement on the best way to achieve this.
Some suggested establishing a new AI regulator, a few proposed appointing an existing regulator as the ‘lead regulator’, and others endorsed voluntary regulatory coordination measures, emphasising the role of regulatory fora such as the Digital Regulation Cooperation Forum (DRCF).

43. Some respondents suggested that horizontal cross-sector standards and assurance techniques would encourage consistency across regulatory remits, sectors, and international jurisdictions. Respondents recommended clarifying the specific remits of each regulator in relation to AI to promote coherence across the regulatory landscape. A few argued that introducing AI legislation, including putting the AI principles and regulatory coordination into statute, would prevent regulatory divergence.

Monitoring and evaluation of the framework

15. Do you agree with our overall approach to monitoring and evaluation?

44. Over half of respondents agreed with the overall approach to monitoring and evaluation set out in the AI regulation white paper. Many commended the proposals for a feedback loop and advised that industry, regulators, and civil society should be engaged to help measure the effectiveness of the framework. Respondents broadly supported an iterative approach and some suggested consulting industry as part of a regular evaluation to assess and adapt the framework. A few respondents advocated for findings from framework evaluations to be publicly available.

45. Some respondents stated that there was not enough detail or that the approach to monitoring and evaluation was unclear. To determine the practicality of the approach, respondents requested more information about the format, frequency, and sources of data that will be developed and used. Some of these respondents stressed the importance of identifying issues with the framework in a timely way. Respondents emphasised that AI risks will need to be continuously monitored, noting that more clarity and transparency is needed on how risks will be escalated and addressed.

16. What is the best way to measure the impact of our framework?

46. Many respondents suggested a data-driven approach to measuring the impact of the framework would be most effective. Respondents recommended qualitative and quantitative data collection, impact assessments, and key performance indicators (KPIs). Examples of possible KPIs included consumer trust and satisfaction, rate of innovation, time to market, complaints and adverse events, litigation, and compliance costs. A few respondents suggested using economic growth to measure the impact of the framework.
A couple wanted to see measurements tailored to specific sectors and suggested that the government engage with regulators to understand how they measure regulatory impacts on their respective industries.

47. Just over a quarter of respondents recommended that the government maintain a close dialogue with industry, civil society, and international partners. Respondents repeatedly stressed the importance of gathering a holistic view of impact, with many noting that the government should engage with stakeholders who can offer different perspectives on the framework’s efficacy, including start-ups and small businesses. Respondents felt that broad consultation to gather evidence on public attitudes towards the framework and AI more generally would also be useful.

48. Respondents suggested that international interoperability should be monitored to ensure that the framework allows businesses to trade with and develop products for international markets. Some respondents suggested referencing established indicators and frameworks, such as the United Nations Sustainable Development Goals and the Five Capitals, to inform a set of qualitative and quantitative measures.

17. Do you agree that our approach strikes the right balance between supporting AI innovation; addressing known, prioritised risks; and future-proofing the AI regulation framework?

49. Half of respondents agreed that the approach strikes the right balance between supporting AI innovation; addressing known, prioritised risks; and future-proofing the AI regulation framework. However, some respondents were concerned that the approach would not be able to keep pace with the technological development of AI, stating that adequate future-proofing of the framework will depend on retaining flexibility and adaptability when implementing the principles. Respondents wanted greater clarity on the specific areas to be regulated and stressed that regulators need to be proactive in identifying the risk of harm.

50. Over a third of respondents disagreed. Respondents were concerned that the framework does not clearly allocate responsibility for AI outcomes. Some thought that the focus on AI innovation, economic growth, and job creation would prevent a sufficient focus on AI-related risks, such as bias and discrimination.

Regulator capabilities

18. Do you agree that regulators are best placed to apply the principles and the government is best placed to provide oversight and deliver central functions?
51. Nearly all respondents agreed that regulators are best placed to implement the principles and that the government is best placed to provide oversight and deliver the central functions.

52. While respondents noted that regulators’ domain-specific expertise would be key to the effective tailoring of the cross-sectoral principles to sector needs, some also suggested that the government should support regulators to manage AI risks within their remits by building their technical AI skills and expertise.

53. Some respondents argued that the government would need to work closely with regulators to provide effective oversight of the framework and delivery of the central functions. Some also endorsed further collaboration between regulators. A few felt that the government’s oversight of the framework should be open and transparent, advocating for input from industry and civil society.

54. Some respondents were concerned that no current bodies were best placed to support the implementation and oversight of the proposed framework, with a few asking for AI legislation and a new AI regulator.

19. As a regulator, what support would you need in order to apply the principles in a proportionate and pro-innovation way?

55. While regulators that responded to this question supported the proposed framework, just over a quarter argued that the key challenge to proportionate and pro-innovation implementation would be coordination. Regulators saw value in sharing best practices to aid consistency and build existing knowledge into sector-specific approaches. Many suggested that strong mechanisms to share information between regulators and the proposed central functions would help avoid duplicate requirements across multiple regulators.

56. Regulators that responded to this question reported inconsistent AI capabilities, with over a quarter asking for further support with technical expertise and others demonstrating advanced approaches to addressing AI within their remits. Regulators identified common capability gaps, including a lack of technical AI knowledge and limited understanding of where and how AI is used by those they regulate. Some suggested that government support in building internal organisational capacity would help them to apply the principles effectively within their existing remits, with some noting that they struggle to compete with the private sector to recruit the right technical expertise and skills. A couple of regulators highlighted how initiatives such as the government-funded Regulators’ Pioneer Fund have already allowed them to develop approaches to responsible AI innovation in their remits.
Two regulators reported that the scope of their existing statutory remits and powers in relation to AI is unclear. These regulators asked for further details on how the central function would ensure that regulators used their powers and remits in a coherent way as they apply the principles.

20. Do you agree that a pooled team of AI experts would be the most effective way to address capability gaps and help regulators apply the principles?

57. Over three quarters of respondents agreed that a pooled team of AI experts would be the most effective way to build common capability and address gaps. Respondents felt that a team of pooled AI experts could help regulators to understand AI and address its unique characteristics within their sectors, supporting the consistent application of the principles across remits.

58. While respondents supported increasing regulators’ access to AI expertise, many stressed that a pooled team would need to contain diverse and multi-disciplinary perspectives. Respondents felt the pooled team should bring together technical AI expertise with sector-specific knowledge, industry specialists, and civil society to ensure that regulators are considering a broad range of views in their application of the principles.

59. Some respondents stated that a pool of experts would be insufficient and suggested that in-house regulator capability with sector-specific expertise should be prioritised.

Tools for trustworthy AI

21. Which non-regulatory tools for trustworthy AI would most help organisations to embed the AI regulation principles into existing business processes?

60. There was strong support for the use of technical standards and assurance techniques, with respondents agreeing that both would help organisations to embed the AI principles into existing business processes. Many respondents praised the UK AI Standards Hub and the Centre for Data Ethics and Innovation’s (CDEI) work on AI assurance. While some respondents noted that businesses would have a smaller compliance burden if tools and processes were consistent across sectors, others noted the importance of additional sector-specific tools and processes. Respondents also suggested supplementing technical standards with case studies and examples of good practice.
61. Respondents argued that standardised tools and techniques for identifying and mitigating potential AI-related risks would also support organisations to embed the AI principles. Some identified assurance techniques such as impact and risk assessments, model performance monitoring, model uncertainty evaluations, and red teaming as particularly helpful for identifying AI risks. A few respondents recommended assurance techniques that can detect and prevent issues such as drift to mitigate risks related to data. While commending the role of tools for trustworthy AI, a few respondents also expressed a desire for more stringent regulatory measures, such as statutory requirements for high-risk applications of AI or a watchdog for foundation models.
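To make the drift-detection techniques mentioned above concrete, here is a minimal, illustrative sketch using a two-sample Kolmogorov-Smirnov test in Python; the feature, data, and significance threshold are assumptions for the example, not part of the consultation:

```python
import numpy as np
from scipy.stats import ks_2samp

# Synthetic stand-ins for a single numeric feature: data the model was
# trained on, and recent production inputs with a shifted mean.
rng = np.random.default_rng(0)
training_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)
live_feature = rng.normal(loc=0.4, scale=1.0, size=5_000)

# The KS test compares the two empirical distributions; a small p-value
# suggests the live data no longer matches the training distribution.
statistic, p_value = ks_2samp(training_feature, live_feature)

ALPHA = 0.01  # illustrative significance threshold
if p_value < ALPHA:
    print(f"Possible drift detected (KS={statistic:.3f}, p={p_value:.2e})")
else:
    print("No significant drift detected")
```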
62. Respondents felt that tools and techniques such as fairness metrics, transparency reports, and organisational AI ethics guidelines can support the responsible use of AI while growing public trust in the technology. Respondents expressed a desire for third-party verification of AI models through bias audits, consumer labelling schemes, and external certification against technical standards.
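As an illustration of one of the simpler fairness metrics a bias audit might report, the sketch below computes a demographic parity difference; the synthetic data and the 0.1 review threshold are invented for the example:

```python
import numpy as np

# Synthetic audit data: model decisions (True = favourable outcome) and
# a binary protected attribute for each individual.
rng = np.random.default_rng(1)
group = rng.integers(0, 2, size=10_000)
decisions = rng.random(10_000) < np.where(group == 1, 0.55, 0.45)

# Demographic parity difference: the gap between the favourable-outcome
# rates of the two groups. Zero means parity on this metric.
rate_a = decisions[group == 0].mean()
rate_b = decisions[group == 1].mean()
parity_gap = abs(rate_a - rate_b)

print(f"selection rates: {rate_a:.3f} vs {rate_b:.3f}, gap = {parity_gap:.3f}")
if parity_gap > 0.1:  # illustrative audit threshold
    print("Flag for review: outcome rates differ materially between groups")
```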
63. A few respondents noted the benefits of international harmonisation across AI governance approaches for both organisations and consumers. Some endorsed interoperable technical standards for AI, commending international standards development organisations (SDOs) such as the International Organization for Standardization (ISO) and the Institute of Electrical and Electronics Engineers (IEEE). Others noted the strength of a range of international work on AI, including that by individual countries, such as the USA’s National Institute of Standards and Technology (NIST) AI Risk Management Framework (RMF) and Singapore’s AI Verify Foundation, along with work on international governance by multilateral bodies such as the Organisation for Economic Co-operation and Development (OECD), the United Nations (UN), and the G7.

Final thoughts

22. Do you have any other thoughts on our overall approach? Please include any missed opportunities, flaws, and gaps in our framework.

64. Some respondents felt that the AI regulation framework set out in the white paper would benefit from more detailed guidance on AI-related risks. Some wanted to see more stringent measures for severe risks, particularly related to the use of AI in safety-critical contexts. Respondents suggested that the framework would be clearer if the government provided risk categories for certain uses of AI, such as law enforcement and places of work. Other respondents stressed that AI can pose or accelerate significant risks related to privacy and data protection breaches, cyberattacks, electoral interference, misinformation, human rights infringements, environmental sustainability, and competition issues. A few respondents were concerned about the potential existential risk posed by AI. Many respondents felt that AI technologies are developing faster than regulatory processes.

65. Respondents argued that the success of the framework relies on sufficient coordination between regulators in order to provide a clear and consistent approach to AI across sectors and markets. Respondents also noted that different sectors face particular AI-related benefits and risks, suggesting that the framework would need to balance the consistency provided by cross-sector requirements with the precision of sector-specific approaches. In particular, respondents flagged that any new rules or bodies to regulate AI should build from the existing statutory remits of regulators and relevant regulatory standards. Respondents also noted that regulators would need to be adequately resourced with technical expertise and skills to implement the framework effectively.

66. Respondents consistently emphasised that effective AI regulation relies on international harmonisation. Respondents suggested that the UK should work towards an internationally aligned regulatory ecosystem for AI by developing a gold-standard framework and promoting best practice through key multilateral channels such as the OECD, UN, G7, and G20. Respondents noted that divergent or overlapping approaches to regulating AI would cause significant compliance burdens. Respondents argued that international cooperation can support responsible AI innovation in the UK by creating clear and certain rules that allow investments to move across multiple markets. Respondents also suggested establishing bilateral working groups with key strategic partners to share expertise. Some respondents stressed that the UK’s pro-innovation approach should be delivered at pace to remain competitive in a fast-moving international landscape.

Legal responsibility for AI

L1. What challenges might arise when regulators apply the principles across different AI applications and systems? How could we address these challenges through our proposed AI regulatory framework?

67. Respondents felt that there were two core challenges for regulators applying the principles across different AI applications and systems: a lack of clear legal responsibility across complicated AI life cycles, and issues with coordination across regulators and sectors.

68. Over a quarter of respondents felt it was not clear who would be held liable for AI-related risks. Some respondents raised a further concern about confusing interactions between the framework and existing legislation.

69. While nearly half of respondents were concerned about coordination and consistency across sectors and regulatory remits, some indicated that a solution (and the strength of the framework) lay in a context-based approach. Respondents asked for sector-based guidance from regulators, compliance tools, and regulator engagement with industry.
70. Many respondents suggested introducing statutory requirements or centralising the framework within a single organisational body, but there was no consensus over whether this centralisation should take the form of a lead regulator, a central regulator, or a coordination function. Some respondents suggested mandating industry transparency or third-party audits.

71. Respondents also raised a lack of international standards and agreements as a challenge, pointing to the importance of international alignment and collaboration.

L2.i. Do you agree that the implementation of our principles through existing legal frameworks will fairly and effectively allocate legal responsibility for AI across the life cycle?

72. While some respondents somewhat agreed that the principles would allocate legal responsibility for AI fairly and effectively through existing legal frameworks, most respondents either disagreed or neither agreed nor disagreed. Many respondents stated that it is not clear how the AI regulation principles would be implemented through existing legal frameworks. Respondents voiced concerns about gaps in existing legislation, including intellectual property, legal services, and employment law. Some respondents stated that intellectual property rights needed to be affirmed and clarified to improve legal responsibility for AI. A few respondents noted the need for the AI framework to monitor and adapt as the technology advances and becomes more widely used. One respondent noted that the burden of liability falls at the deployer level and suggested that it would be essential to address information gaps in the AI life cycle to improve the allocation of legal responsibility.

L2.ii. How could it be improved, if at all?

73. Many respondents felt that the framework needed to further clarify liability across the AI life cycle. In particular, respondents repeatedly noted the need for a legally responsible person for AI, and some suggested a model similar to Data Protection Officers.

74. Over a quarter of respondents stated that new AI legislation or regulator powers would be necessary to effectively allocate liability across the life cycle. Some named specific measures that would need statutory underpinning, with a few advocating for licensing and pre-approvals and a couple suggesting a moratorium on the most advanced AI.

75. Others felt that it would be best to clarify legal responsibility for AI according to existing frameworks.
Respondents wanted clarity on how the principles would be applied with or through existing law, with some suggesting that regulatory guidance would provide greater certainty.

76. Respondents also suggested that non-statutory measures, such as enhancing regulators’ technical capability, domestic and international standards, and assurance techniques, would help fairly and effectively allocate legal responsibility across the AI life cycle.

77. Others noted that the proposed central functions, including risk assessment, horizon scanning, and monitoring and evaluation, would be key to ensuring that legal responsibility for AI is fairly and effectively distributed across the life cycle as AI capabilities advance and become increasingly used.

L3. If you are a business that develops, uses, or sells AI, how do you currently manage AI risk, including through the wider supply chain? How could government support effective AI-related risk management?

78. Nearly half of respondents to this question told us that they had implemented risk assessment processes for AI within their organisation. Many used existing best practice processes and guidance from their sector or trade bodies such as techUK. Some felt that the proliferation of different organisational risk assessment processes reflected the absence of overarching guidance and best practice from the government. Of these respondents, many suggested that it would be useful for businesses to understand the government’s view on AI-related best practices, with some recommending a central guide on using AI safely.

79. Many respondents noted their compliance with existing legal frameworks that capture AI-related risks, such as product safety and personal data protections. Respondents highlighted that any future AI measures should avoid duplicating or contradicting existing rules and laws.

80. Respondents consistently stressed the importance of transparency, with some highlighting information-sharing tools like model cards. Similarly to Q2, some respondents suggested that labelling AI use would benefit users, particularly in building literacy around potentially malicious AI-generated content, such as deepfakes and disinformation. A few respondents argued that AI labelling can help shape expectations of a service and should be a consumer protection. Echoing answers to F1, respondents also mentioned that services should be transparent about the data used to train AI models so users can understand how tools and services work, as well as their limitations.
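For readers unfamiliar with the model cards mentioned above, the fragment below sketches the kind of fields such a card typically carries; the structure, names, and values are illustrative assumptions rather than any mandated schema:

```python
# Illustrative only: there is no single mandated model card schema; these
# fields loosely follow common documentation practice for models.
model_card = {
    "model_name": "example-credit-scorer",  # hypothetical model
    "version": "1.2.0",
    "intended_use": "Pre-screening of consumer credit applications",
    "out_of_scope_uses": ["Employment decisions", "Insurance pricing"],
    "training_data": {
        "source": "Anonymised historical applications, 2018-2022",
        "known_gaps": ["Under-represents applicants under 21"],
    },
    "evaluation": {
        "accuracy": 0.87,
        "demographic_parity_gap": 0.04,  # see the fairness sketch above
    },
    "limitations": "Performance degrades on thin-file applicants.",
    "contact": "ai-governance@example.org",
}

# A deployer or user can read off intended use and limitations directly.
for field in ("intended_use", "limitations"):
    print(f"{field}: {model_card[field]}")
```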
81. Responses showed that the size of an organisation shaped its capacity to assess AI-related risks. While larger organisations mentioned that they engage with customers and suppliers to shape and share best practices, some smaller businesses asked for further support to assess AI-related risk and implement the AI principles effectively.

Foundation models and the regulatory framework

F1. What specific challenges will foundation models such as large language models (LLMs) or open-source models pose for regulators trying to determine legal responsibility for AI outcomes?

82. While respondents supported the AI regulation framework set out in the white paper, many were concerned that foundation models may warrant a bespoke regulatory approach. In particular, respondents noted that foundation models are characterised by their technical complexity and stressed their potential to underpin many different applications across multiple sectors. Nearly a quarter of respondents emphasised that foundation models make it difficult to determine legal responsibility for AI outcomes, with some sharing hypothetical use cases where both upstream and downstream actors are at fault. Respondents stressed that technical opacity, complex supply chains, and information asymmetries prevent sufficient explainability, accountability, and risk assessment for foundation models.

83. Many respondents were concerned about the quality of the data used to train foundation models and whether training data is appropriate for all downstream model applications. Respondents stated that it was not clear whether data used to train foundation models complies with existing laws, such as those for data protection and intellectual property. Respondents noted that definitions and standards for training data were lacking. Respondents felt that data use could be improved through better information-sharing measures, benchmark measurements and standards, and the clear allocation of responsibility to a specific actor or person for whether or not data is appropriate to a given application.

84. Some respondents emphasised the complexity of foundation model supply chains and argued that information asymmetries between upstream developers (with technical oversight) and downstream deployers (with application oversight) not only muddy legal responsibility for AI outcomes but also prevent sufficient risk monitoring and mitigation.
While some respondents noted the concentrated market power of foundation model developers and suggested these actors were best positioned to mitigate related risks, others argued that developers would have limited sight of the risks linked to specific downstream applications. Many raised concerns about the lack of measures to rigorously judge the appropriateness of a foundation model for a given application.

85. A few respondents noted concerns regarding wider access to AI, including open sourcing, leaking, or malicious use. However, a similar number of respondents noted the importance of open source to AI innovation, transparency, and trust.

F2. Do you agree that measuring compute provides a potential tool that could be considered as part of the governance of foundation models?

86. Half of respondents felt compute was an inadequate proxy for governance requirements, with many arguing that the fast pace of technological change would mean compute-related thresholds would quickly become outdated. However, nearly half somewhat agreed that measuring compute would be useful for foundation model governance, suggesting that it could be used to assess whether a particular AI model should follow certain requirements when used alongside other governance measures. A few respondents noted that measuring compute would be one way to capture the environmental impact of different AI models.
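To illustrate how a compute-based threshold might work in practice, the sketch below applies the widely cited approximation that training compute is roughly 6 x parameters x training tokens; the heuristic, the model sizes, and the threshold are all assumptions external to the consultation:

```python
# Rough heuristic from the scaling literature (not from this consultation):
# training FLOPs ~= 6 * parameter count * training tokens.
def estimated_training_flops(params: float, tokens: float) -> float:
    return 6.0 * params * tokens

# Hypothetical governance threshold; real thresholds would be set in policy.
THRESHOLD_FLOPS = 1e25

models = {
    "small-model": (7e9, 2e12),       # 7B params, 2T tokens (illustrative)
    "frontier-model": (1e12, 15e12),  # 1T params, 15T tokens (illustrative)
}

for name, (params, tokens) in models.items():
    flops = estimated_training_flops(params, tokens)
    status = "in scope" if flops >= THRESHOLD_FLOPS else "out of scope"
    print(f"{name}: ~{flops:.1e} FLOPs -> {status}")
```

The respondents' objection in paragraph 86 is visible even in this toy version: the threshold constant is fixed, while model efficiency and scale change quickly, so the boundary between "in scope" and "out of scope" would need frequent revision.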
F3. Are there other approaches to governing foundation models that would be more effective?

87. There was wide support for governance measures and tools for trustworthy AI, with respondents advocating for the use of organisational governance, technical standards, and assurance techniques dedicated to foundation models.

88. Some respondents recommended assessing foundation model capabilities and applications rather than compute. Respondents felt that model verification measures, such as audits and evaluations, would be effective, with some suggesting these should be mandatory requirements. Some respondents noted the importance of downstream monitoring or post-market surveillance. One respondent suggested a pre-deployment sandbox.

89. A small number of respondents wanted to see statutory requirements on foundation models. A few endorsed moratoriums, bans, or limits on foundation models and their uses. Others suggested using contracts, licences, and user agreements, with respondents also noting the importance of both physical and cyber security measures.

AI sandboxes and testbeds

S1. Which of the sandbox models described in section 3.3.4 would be most likely to support innovation?

90. While a large majority of respondents were strongly supportive of sandboxes in general, the “multiple sector, multiple regulator” (MSMR) and “single sector, multiple regulator” (SSMR) models were seen as the most likely to support innovation.

91. Over a third of respondents felt the MSMR model would support innovation, noting that its cross-sectoral basis would enable regulators to develop effective guidance in response to live issues, harmonise rules, coordinate implementation, ensure applicability to safety-critical sectors, and identify complementary policy levers. Respondents suggested that an MSMR sandbox should tackle issues related to the implementation of the AI principles, including identifying and addressing any gaps in the framework, overlaps with existing regulation, coordination challenges between sectors and regulators, and any blockers to effective implementation of the regulatory framework, such as regulator capacity. Respondents also stressed that the sandbox should be flexible and adaptable in order to future-proof against new technological developments.

92. An equal number of respondents endorsed the SSMR model. Respondents noted that the SSMR and “multiple sector, single regulator” (MSSR) models would be easier to launch due to their more streamlined coordination across a single sector or regulator. For this reason, respondents felt that these models might drive the most immediate value. Some suggested that an initial single-sector or single-regulator sandbox could be adapted into an MSMR model as work progressed, in order to capture the benefits of both models.

S2. What could the government do to maximise the benefit of sandboxes to AI innovators?

93. Some respondents argued that the sandbox should be developed and delivered in collaboration with businesses, regulators, consumer groups, academics, and other experts. Respondents suggested building on the existing strengths of the UK regulatory landscape, such as facilitating cross-sector learning through the Digital Regulation Cooperation Forum (DRCF).

94. Respondents stated that the sandbox should develop guidance, share information and tools, and provide support to AI innovators. In particular, respondents said that information about opportunities for involvement should be shared, and noted that sharing outcomes would encourage wider participation.
Respondents wanted the sandbox to be open and transparent, with many advocating for sandbox processes, regulatory assessments and reports, decision processes, evidence reviews, and subsequent findings to be made available to the public. Respondents suggested that regular reports and guidance from the sandbox would inform innovators and future regulation by creating “business-as-usual” processes. Respondents felt that measures should be taken to make the sandbox as accessible as possible, with a few advocating for dedicated pathways and training for smaller businesses.

95. Respondents felt that the sandbox should be used to inform and develop technical standards and assurance techniques that can be widely used. A few mentioned that this would help promote best practice across industry. Others noted that, to be most beneficial, the sandbox should be well aligned with wider regulation for AI. Respondents also noted that a sandbox presents an opportunity for the UK to demonstrate global leadership in AI regulation and technical standards by sharing findings and best practices internationally.

96. Respondents noted that the sandbox could support innovation by providing market advantages, such as product certification, to maximise the benefits to AI innovators. Other financial incentives suggested by respondents included innovation grants, tax credits, and free or funded participation in supervised test environment sandboxes. A few stakeholders agreed that funding would help start-ups and smaller businesses with fewer organisational resources to participate in research and development focused sandboxes. Respondents suggested that the sandbox could collaborate with UK and international investment companies to build opportunities for participating companies.

S3. What could the government do to facilitate participation in an AI regulatory sandbox?

97. Some respondents suggested that grants, subsidies, and tax credits would encourage participation by smaller businesses and start-ups in resource-intensive, research and development focused sandbox models such as supervised test environments.

98. Respondents endorsed a range of incentives to facilitate participation in different sandbox models, including access to standardised and anonymised datasets, and accreditation schemes that would show alignment with regulatory requirements and help gain market access. There was some support for innovation competitions that would help select participants.

99. Similarly to S2, respondents agreed that collaboration and consultation with a range of stakeholders would help facilitate broad participation. Respondents suggested research centres, accelerator programmes, and university partnerships. There was support for a diverse group of stakeholders to be involved in the early stages of sandbox development, especially to identify regulatory areas with high risk. There was some support for harmonised evaluation frameworks across sectors to reduce regulatory burden and encourage wider interest from prospective stakeholders.
One respondent proposed a dedicated online platform that would provide access to relevant guidance, a portal for submitting and tracking applications, and a community forum.

100. There was broad support for a simple application process with clear guidelines, templates, and information on eligibility and legal requirements. Respondents expressed support for clear entry and exit criteria, noting the importance of reducing the administrative burden on smaller businesses and start-ups to lower the barrier to entry.

S4. Which industry sectors or classes of product would most benefit from an AI sandbox?

101. While there was no overall consensus on a specific sector or class of product that would most benefit from an AI sandbox, respondents identified two “safety-critical” sectors with a high degree of potential risk: healthcare and transport. Respondents noted that real-world testing is rarely possible in these sectors, which would therefore benefit from an AI sandbox. Respondents noted the potential to enhance healthcare outcomes, patient safety, and compliance with patient privacy guidelines by fostering innovation in areas such as diagnostic tools, personalised medicine, drug discovery, and medical devices. Other respondents noted the rise of autonomous vehicles and intelligent transportation systems, along with significant enthusiasm from industry to test the regulatory framework.

102. Some respondents suggested that financial services and insurance would benefit from an AI sandbox due to heavy investment from the sector in automation and AI. Respondents also noted that financial services and insurance are overseen by multiple regulators, including the Information Commissioner’s Office (ICO), the Prudential Regulation Authority (PRA), the Financial Conduct Authority (FCA), and The Pensions Regulator (TPR). Respondents noted that financial services could leverage an AI sandbox to explore AI-based applications for risk assessment, fraud detection, algorithmic trading, and customer service.

103. One respondent noted that the nuclear sector is already benefiting from an AI sandbox.
The Office for Nuclear Regulation (ONR) and the Environment Agency (EA) have taken the learnings from their own regulatory sandbox to develop the concept of an international AI sandbox for the nuclear sector.

Annex D: Summary of impact assessment evidence

This annex provides a summary of the written evidence we received in response to our consultation on the AI regulation impact assessment[footnote 124]. We asked eight questions, including seven open or semi-open questions that received a range of written reflections. We asked:

1. Do you agree that the rationale for intervention comprehensively covers and evidences current and future harms?
2. Do you agree that increased trust is a significant driver of demand for AI systems?
3. Do you have any additional evidence to support the following estimates and assumptions across the framework?
4. Do you agree with the estimates associated with the central functions?
5. Are you aware of any alternative metrics to measure the policy objectives?
6. Do you believe that some AI systems would be prohibited in options 1 and 2 due to increased regulatory scrutiny?
7. Do you agree with our assessment of each policy option against the objectives?
8. Do you have any additional evidence that proves or disproves our analysis in the impact assessment?

In total we received 64 written responses on the impact assessment consultation from organisations and individuals. The method of our analysis is captured in Annex A, and a summary of responses to these questions follows below.

Question 1: Do you agree that the rationale for intervention comprehensively covers and evidences current and future harms?

Summary of responses:

More than half of respondents disagreed that the rationale for intervention comprehensively covers evidence of current and future harms. Nearly half of respondents stated that not all risks are adequately addressed. Many of these respondents argued that the rationale does not account for unexpected harms or existential and systemic risks.
One respondent argued that the rationale does not consider the impact of AI on human rights. Another respondent suggested that there should be mandatory requirements for the ethical collection of data, and another advocated for pre-deployment measures to mitigate AI risks.

Over a quarter of respondents suggested analysing risks and opportunities for each sector. These respondents often argued that the potential harms and benefits in different industries are not accounted for, such as the impact of AI on jobs.

Some respondents advocated for the government to build the evidence base on current and future harms as well as potential interventions. Many of these respondents emphasised the importance of including diverse perspectives and the public voice when conducting research and regulating AI.

A few respondents noted that the government and regulators should adopt a flexible approach that monitors and can adapt to technological developments.

A few respondents stated that excessive regulation and government intervention will stifle innovation instead of encouraging it. These respondents argued that there needs to be a balance between mitigating risks and enabling the benefits of AI.

One respondent stated that there should be an independent regulator for AI.

Question 2: Do you agree that increased trust is a significant driver of demand for AI systems?

Summary of responses:

Over half of respondents agreed that trust is a significant driver of demand for AI systems. However, around a quarter disagreed and some remained unsure.

Over a third of respondents gave a written answer that provided further insight beyond agreeing or disagreeing. Of these, many respondents stressed that transparency, education, and governance measures (such as regulation and technical standards) increase trust. These ideas were reflected among both respondents who agreed and those who disagreed that trust drives demand for AI.

Respondents also argued that trust in AI could be reduced by concerns about bias or safety. Some of these respondents highlighted that unfair or opaque bias in AI systems not only reduces trust but impacts already marginalised communities the most. Some respondents argued that prioritising innovation over trust in a regulatory approach would reduce trust.

Among the respondents who disagreed that trust was a driver of AI uptake and provided further written responses, two main themes emerged.
First, that demand for AI is driven by economic and financial incentives and, second, that it is driven by technological developments. For example, one respondent highlighted that AI could increase productivity and thus the profitability of companies. Respondents also highlighted technological developments as a driver of AI demand, with two respondents stating that companies’ “fear of missing out” on new technologies could drive their demand for AI systems.

Respondents that disagreed often suggested that increasing AI demand and adoption comes at the cost of safeguarding the public and mitigating risk.

Question 3: Do you have any additional evidence to support the following estimates and assumptions across the framework?

Summary of responses:

Respondents reacted to each statement differently, with mixed levels of agreement across all statements. In written feedback, some respondents suggested that our estimates and assumptions depend on complex factors, or that it is not possible to provide estimates about AI due to too many uncertainties.

On the first estimate, that the 431,671 businesses adopting or consuming AI will be impacted less than the estimated 3,170 businesses supplying or producing AI, disagreeing respondents found that it understates the number of businesses likely to be affected by AI, that the number can change rapidly because it is easy to integrate AI into a product or service, that the division between AI adopters and producers is somewhat artificial, and that consumers should also be considered.

On the second statement, that those who adopt or consume AI products and services will face lower costs than those who produce or supply them, there was some disagreement and one response that agreed. Those who disagreed questioned whether consumers of AI will face lower costs than producers, noting that consumers and users more widely can face (increasing) costs of using AI applications.
On the other hand, one respondent mentioned that cost savings will apply to users without a deep understanding of the technology, and that producers will face high salary costs because of the small pool of labour talent able to operate advanced AI systems.

On the third estimate, that familiarisation costs (here referring to the cost to businesses of upskilling employees in new regulation) will land in the range of £2.7 million to £33.7 million, a couple of respondents that disagreed stated that familiarisation costs could vary from business to business. These respondents argued that the current range understates the full costs and recommended considering other costs. Some suggested that consumers need to be trained on residual risk and how to overcome automation bias. Others mentioned that the independent audit of AI systems will create many new highly trained jobs.

Finally, on the fourth estimate, that compliance costs (here reflecting the cost to businesses of adjusting business elements to comply with new standards) will land in the range of £107 million to £6.7 billion, there was further disagreement. Some respondents said that compliance costs should be as low as possible, but there was no agreement on how best to achieve this. Other respondents stated that companies will not comply, or that compliance would necessitate new business activities.
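As a rough scale check only (an even split across businesses is not the impact assessment's methodology), dividing the quoted ranges by the roughly 434,841 businesses identified under the first estimate gives per-business figures:

```python
# Back-of-envelope only: the impact assessment does not allocate costs
# this way; spreading costs evenly across businesses is an assumption.
businesses = 431_671 + 3_170       # adopters/consumers plus suppliers/producers

familiarisation = (2.7e6, 33.7e6)  # £2.7 million to £33.7 million
compliance = (107e6, 6.7e9)        # £107 million to £6.7 billion

for label, (low, high) in [("familiarisation", familiarisation),
                           ("compliance", compliance)]:
    print(f"{label}: £{low / businesses:,.0f} to £{high / businesses:,.0f} per business")
# familiarisation: £6 to £78 per business
# compliance: £246 to £15,409 per business
```

The wide per-business spread, especially for compliance, is consistent with the respondents' point that costs will vary considerably from business to business.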
Question 4: Do you agree with the estimates associated with the central functions?

Summary of responses:

A slight majority of respondents somewhat disagreed with the estimates outlined in the AI regulation impact assessment, suggesting that the central function estimates are too high. Some respondents mentioned that the central function could deploy AI and use automation to harness efficiency and drive down cost estimates. Two respondents also highlighted that the central function could employ techniques such as peer-to-peer learning and networks to drive down cost estimates.

On the other hand, some respondents indicated that the central function estimates are too low. Some believed the current estimates are too low because they do not account for costs associated with late upskilling of central function employees. One respondent suggested that increasing demand for AI from the commercial sector would raise costs further and create challenges for the central function in accessing AI solutions due to inflationary cost pressure. Some respondents suggested that the expanding scale and capabilities of AI would require a larger central function to regulate the technology, arguing that the current costs are likely to be conservative estimates.

A few respondents did agree that the estimates are accurate. However, many noted that it would be a challenge to pin a specific number to the estimates associated with the central function, and suggested that a lack of clarity in defining terms made it difficult to assess the accuracy of the estimates.

Question 5: Are you aware of any alternative metrics to measure the policy objectives?

Summary of responses:

More than a third of respondents suggested alternative metrics that could be used to measure the policy objectives. Suggestions included tracking the number of models being audited for bias and fairness; the number of AI-related incidents being reported and investigated; and metrics related to the framework’s operation, such as the number of regulators publishing guidance, the nature of that guidance and the associated outcomes for organisations that have adopted it, or sentiment indicators from stakeholders. Other suggestions included tracking public trust in and acceptance of AI systems.

Almost a quarter of respondents suggested existing frameworks and models. A couple of respondents suggested that effective assessment and regulation of harm would be key to measuring the policy objectives.

Question 6: Do you believe that some AI systems would be prohibited in options 1 and 2 due to increased regulatory scrutiny?

Summary of responses:

Over half of respondents agreed that some AI systems would be prohibited in options 1 and 2 due to increased regulatory scrutiny. Around a quarter of respondents disagreed and just under a third were unsure.

Of the respondents that expanded on their thoughts, a third suggested that some AI systems present a threat to society and should be prohibited. These respondents emphasised that prohibition would reduce AI risks and saw prohibition as a positive impact. Some suggested that a lack of any prohibition would represent a failure of the regulatory framework.

Some stakeholders suggested that some AI systems would be prohibited. However, a similar number suggested that the regulatory scrutiny under options 1 and 2 would not be sufficient to prohibit AI systems. These two sets of responses reflected conflicting understandings of the intensity of the proposed regulations, rather than inherent views on how regulation might impact the sector.
A few indicated that the impact assessment was unable to provide enough evidence about which AI systems might be prohibited.

Question 7: Do you agree with our assessment of each policy option against the objectives?

Summary of responses:

Just over a third of respondents either strongly or somewhat agreed with the assessment of each policy option against the objectives, with most responding that they somewhat agreed. A similar proportion either strongly or somewhat disagreed, with most of these responding that they only somewhat disagreed. Around a quarter of respondents neither agreed nor disagreed, or indicated they were unsure.

Question 8: Do you have any additional evidence that proves or disproves our analysis in the impact assessment?

Summary of responses:

Almost half of written responses suggested that the AI regulation impact assessment underestimated the impacts of AI. These respondents indicated that the impacts of AI are much larger and more harmful than the AI regulation impact assessment and white paper imply.

Just under a third indicated that the government should act quickly to regulate emerging AI technologies. These respondents emphasised that timely action should be a key focus for AI regulation, given the quickly advancing capabilities of the technology.

Some respondents indicated that there was too great a degree of uncertainty to make accurate assessments. These respondents thought that any estimate would be inaccurate due to the nature of AI and the many uncertainties around future developments.

Some respondents suggested that regulators should harmonise their approach to AI, emphasising that the use of these technologies across sectors requires coordinated and consistent regulation.

Footnotes

1. United Kingdom Artificial Intelligence Market, US International Trade Administration, 2023 (https://www.trade.gov/market-intelligence/united-kingdom-artificial-intelligence-market-2023).
\u003ca href=\"#fnref:1\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:2\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca href=\"https://www.gov.uk/government/publications/frontier-ai-capabilities-and-risks-discussion-paper\" class=\"govuk-link\"\u003eFrontier \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e: capabilities and risks\u003c/a\u003e, Department for Science, Innovation and Technology, 2023. \u003ca href=\"#fnref:2\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:3\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca href=\"https://www.gov.uk/government/news/new-advisory-service-to-help-businesses-launch-ai-and-digital-innovations\" class=\"govuk-link\"\u003eNew advisory service to help businesses launch \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e and digital innovations\u003c/a\u003e, Department for Science, Innovation and Technology, 2023. \u003ca href=\"#fnref:3\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:4\" role=\"doc-endnote\"\u003e\n \u003cp\u003eTo support the government’s planning and policy development, and given the material uncertainties that exist, the Government Office for Science has prepared a foresight report outlining possible scenarios that may arise in the context of \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e development, proliferation and impact in 2030. See: \u003ca href=\"https://www.gov.uk/government/publications/frontier-ai-capabilities-and-risks-discussion-paper/future-risks-of-frontier-ai-annex-a\" class=\"govuk-link\"\u003eFuture risks of frontier \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e (Annex A),\u003c/a\u003e Government Office for Science, 2023. A full report on the scenarios will be published shortly (this report will not be a statement of government policy). \u003ca href=\"#fnref:4\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:5\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca href=\"https://www.gov.uk/government/speeches/prime-ministers-speech-on-ai-26-october-2023\" class=\"govuk-link\"\u003ePrime Minister’s speech on \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e: 26 October 2023\u003c/a\u003e, Prime Minister’s Office, 10 Downing Street, 2023. \u003ca href=\"#fnref:5\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:6\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca href=\"https://www.gov.uk/government/publications/international-science-partnerships-fund-ispf\" class=\"govuk-link\"\u003eInternational Science Partnerships Fund\u003c/a\u003e, \u003cabbr title=\"UK Research and Innovation\"\u003eUKRI\u003c/abbr\u003e, 2023. 
\u003ca href=\"#fnref:6\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:7\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca rel=\"external\" href=\"https://openai.com/blog/how-should-ai-systems-behave\" class=\"govuk-link\"\u003eHow should \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e systems behave, and who should decide?\u003c/a\u003e, OpenAI, 2023. \u003ca href=\"#fnref:7\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:8\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca href=\"https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach\" class=\"govuk-link\"\u003eSafety and Security Risks of Generative Artificial Intelligence to 2025\u003c/a\u003e, Government Office for Science, 2023. \u003ca href=\"#fnref:8\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:9\" role=\"doc-endnote\"\u003e\n \u003cp\u003eWe provide further detail on this area as part of our description of the cross-sectoral safety, security and robustness principle in the \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e regulation white paper. See: \u003ca href=\"https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach\" class=\"govuk-link\"\u003e\u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e regulation: a pro-innovation approach,\u003c/a\u003e Department for Science, Innovation and Technology, 2023. \u003ca href=\"#fnref:9\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:10\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca href=\"https://www.gov.uk/government/publications/frontier-ai-capabilities-and-risks-discussion-paper\" class=\"govuk-link\"\u003eFrontier \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e: capabilities and risks\u003c/a\u003e, Department for Science, Innovation and Technology, 2023. \u003ca href=\"#fnref:10\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:11\" role=\"doc-endnote\"\u003e\n \u003cp\u003eLarge dedicated \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e companies make a major contribution to the UK economy, with \u003cabbr title=\"gross value added\"\u003eGVA\u003c/abbr\u003e (gross value added) per employee estimated to be £400,000, more than double that of comparable estimates of large dedicated firms in other sectors. See: \u003ca href=\"https://www.gov.uk/government/publications/artificial-intelligence-sector-study-2022\" class=\"govuk-link\"\u003e\u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e Sector Study 2022\u003c/a\u003e, Department for Science, Innovation and Technology, 2023. 
\u003ca href=\"#fnref:11\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:12\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca rel=\"external\" href=\"https://www.tortoisemedia.com/intelligence/global-ai/\" class=\"govuk-link\"\u003eThe Global \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e Index\u003c/a\u003e Tortoise Media, 2023. \u003ca href=\"#fnref:12\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:13\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca href=\"https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach\" class=\"govuk-link\"\u003e\u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e regulation: a pro-innovation approach\u003c/a\u003e, Department for Science, Innovation and Technology, 2023. \u003ca href=\"#fnref:13\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:14\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca href=\"https://www.gov.uk/government/consultations/ai-regulation-a-pro-innovation-approach-policy-proposals\" class=\"govuk-link\"\u003e\u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e regulation: a pro-innovation approach – policy proposals\u003c/a\u003e, Department for Science, Innovation and Technology, 2023. \u003ca href=\"#fnref:14\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:15\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca href=\"https://www.gov.uk/government/publications/frontier-ai-capabilities-and-risks-discussion-paper\" class=\"govuk-link\"\u003eFrontier \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e: capabilities and risks\u003c/a\u003e, Department for Science, Innovation, and Technology, 2023. \u003ca href=\"#fnref:15\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:16\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca rel=\"external\" href=\"https://www.reuters.com/technology/race-towards-autonomous-ai-agents-grips-silicon-valley-2023-07-17/\" class=\"govuk-link\"\u003eRace towards ‘autonomous’ \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e agents grips Silicon Valley\u003c/a\u003e, Anna Tong and Jeffrey Dastin, 2023; \u003ca rel=\"external\" href=\"https://openai.com/blog/introducing-superalignment\" class=\"govuk-link\"\u003eIntroducing superalignment\u003c/a\u003e, Jan Leike and Ilya Sutskever (OpenAI), 2023; \u003ca rel=\"external\" href=\"https://deepmind.google/about/\" class=\"govuk-link\"\u003e\u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e could be one of humanity’s most useful inventions\u003c/a\u003e, Google Deepmind, n.d.. 
\u003ca href=\"#fnref:16\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:17\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca rel=\"external\" href=\"https://www.oecd.org/employment-outlook/2023/#ai-jobs\" class=\"govuk-link\"\u003eEmployment Outlook 2023: artificial intelligence and jobs\u003c/a\u003e, \u003cabbr title=\"Organisation for Economic Co-operation and Development\"\u003eOECD\u003c/abbr\u003e, 2023. \u003ca href=\"#fnref:17\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:18\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca rel=\"external\" href=\"https://assets.kpmg.com/content/dam/kpmg/uk/pdf/2023/06/generative-ai-and-the-uk-labour-market.pdf\" class=\"govuk-link\"\u003eGenerative \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e and the UK labour market,\u003c/a\u003e KPMG, 2023; \u003ca rel=\"external\" href=\"https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/the-economic-potential-of-generative-ai-the-next-productivity-frontier#key-insights\" class=\"govuk-link\"\u003eThe economic potential of generative \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e: the next productivity frontier\u003c/a\u003e, McKinsey, 2023; \u003ca rel=\"external\" href=\"https://www.ifow.org/publications/adoption-of-ai-in-uk-firms-and-the-consequences-for-jobs\" class=\"govuk-link\"\u003eWhat drives UK firms to adopt \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e and robotics, and what are the consequences for jobs?,\u003c/a\u003e Institute for the Future of Work, 2023. \u003ca href=\"#fnref:18\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:19\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca rel=\"external\" href=\"https://www.forbes.com/sites/cindygordon/2023/02/02/chatgpt-is-the-fastest-growing-ap-in-the-history-of-web-applications/\" class=\"govuk-link\"\u003eChatGPT is the fastest growing app in the history of web applications\u003c/a\u003e, Cindy Gordon, 2023. \u003ca href=\"#fnref:19\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:20\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca rel=\"external\" href=\"https://www.zsl.org/news-and-events/feature/using-ai-monitor-trackside-britains-wildlife\" class=\"govuk-link\"\u003eUsing \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e to monitor trackside Britain’s wildlife\u003c/a\u003e, Zoological Society London, 2023. \u003ca href=\"#fnref:20\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:21\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca rel=\"external\" href=\"https://www.nature.com/articles/s41586-023-06555-x\" class=\"govuk-link\"\u003eA foundation model for generalizable disease detection from retinal images\u003c/a\u003e, Esma Aïmeur et al., 2023. 
\u003ca href=\"#fnref:21\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:22\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca rel=\"external\" href=\"https://dl.acm.org/doi/pdf/10.1145/3544548.3581318\" class=\"govuk-link\"\u003eSynthetic lies: understanding \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e-generated misinformation and evaluating algorithmic and human solutions\u003c/a\u003e, Jiawei Zhou et al., 2023; \u003ca rel=\"external\" href=\"https://link.springer.com/article/10.1007/s13278-023-01028-5#Sec16\" class=\"govuk-link\"\u003eFake news, disinformation and misinformation in social media: a review,\u003c/a\u003e Yukon Zhou et al., 2023; \u003ca rel=\"external\" href=\"https://arxiv.org/pdf/2306.12807.pdf\" class=\"govuk-link\"\u003e\u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e could create a perfect storm of climate misinformation\u003c/a\u003e, Victor Galaz et al., 2023. \u003ca href=\"#fnref:22\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:23\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca rel=\"external\" href=\"https://www.nature.com/articles/s42256-022-00465-9\" class=\"govuk-link\"\u003eDual use of artificial-intelligence-powered drug discovery\u003c/a\u003e, Fabio Urbina et al., 2022. \u003ca href=\"#fnref:23\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:24\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca href=\"https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach\" class=\"govuk-link\"\u003e\u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e regulation: a pro-innovation approach\u003c/a\u003e, Department for Science, Innovation and Technology, 2023. \u003ca href=\"#fnref:24\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:25\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca href=\"https://www.gov.uk/cma-cases/ai-foundation-models-initial-review\" class=\"govuk-link\"\u003e\u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e Foundation Models: initial review\u003c/a\u003e, \u003cabbr title=\"Competition and Markets Authority\"\u003eCMA\u003c/abbr\u003e, 2023. \u003ca href=\"#fnref:25\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:26\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca rel=\"external\" href=\"https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/how-do-we-ensure-fairness-in-ai/\" class=\"govuk-link\"\u003eHow do we ensure fairness in \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e?\u003c/a\u003e, \u003cabbr title=\"Information Commissioner's Office\"\u003eICO\u003c/abbr\u003e, 2023. 
\u003ca href=\"#fnref:26\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:27\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca href=\"https://www.gov.uk/government/publications/software-and-ai-as-a-medical-device-change-programme/software-and-ai-as-a-medical-device-change-programme-roadmap\" class=\"govuk-link\"\u003eSoftware and \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e as a Medical Device Change Programme – Roadmap\u003c/a\u003e, \u003cabbr title=\"Medicines and Healthcare products Regulatory Agency\"\u003eMHRA\u003c/abbr\u003e, updated 2023 [2021]. \u003ca href=\"#fnref:27\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:28\" role=\"doc-endnote\"\u003e\n \u003cp\u003eThe government has written to the Office of Communications (\u003cabbr title=\"Office of Communications\"\u003eOfcom\u003c/abbr\u003e); Information Commissioner’s Office (\u003cabbr title=\"Information Commissioner's Office\"\u003eICO\u003c/abbr\u003e); Financial Conduct Authority (\u003cabbr title=\"Financial Conduct Authority\"\u003eFCA\u003c/abbr\u003e); Competition and Markets Authority (\u003cabbr title=\"Competition and Markets Authority\"\u003eCMA\u003c/abbr\u003e); Equality and Human Rights Commission (\u003cabbr title=\"Equality and Human Rights Commission\"\u003eEHRC\u003c/abbr\u003e); Medicines and Healthcare products Regulatory Agency (\u003cabbr title=\"Medicines and Healthcare products Regulatory Agency\"\u003eMHRA\u003c/abbr\u003e); Office for Standards in Education, Children’s Services and Skills (\u003cabbr title=\"Office for Standards in Education, Children’s Services and Skills\"\u003eOfsted\u003c/abbr\u003e); Legal Services Board (\u003cabbr title=\"Legal Services Board\"\u003eLSB\u003c/abbr\u003e); Office for Nuclear Regulation (\u003cabbr title=\"Office for Nuclear Regulation\"\u003eONR\u003c/abbr\u003e); Office of Qualifications and Examinations Regulation (\u003cabbr title=\"Office of Qualifications and Examinations Regulation\"\u003eOfqual\u003c/abbr\u003e); Health and Safety Executive (\u003cabbr title=\"Health and Safety Executive\"\u003eHSE\u003c/abbr\u003e); Bank of England; and Office of Gas and Electricity Markets (\u003cabbr title=\"Office of Gas and Electricity Markets\"\u003eOfgem\u003c/abbr\u003e). The Office for Product Safety and Standards (\u003cabbr title=\"Office for Product Safety and Standards\"\u003eOPSS\u003c/abbr\u003e), which sits within the Department for Business and Trade, has also been asked to produce an update. Regulators will be best placed to determine the form and substance of their update and we encourage all regulators that consider \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e to be relevant to their work to publish their approaches. As we continue to implement the framework and assess regulator readiness, our prioritisation of regulators may change to reflect evolving factors such as our risk analysis. We will also work with other regulators and encourage the publication of action plans to drive transparency across the wider ecosystem. 
\u003ca href=\"#fnref:28\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:29\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca href=\"https://www.gov.uk/government/publications/pro-innovation-regulation-of-technologies-review-cross-cutting\" class=\"govuk-link\"\u003eResponse to Professor Dame Angela McLean’s Pro-Innovation Regulation of Technologies Review: Cross Cutting\u003c/a\u003e, HM Treasury, 2023. \u003ca href=\"#fnref:29\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:30\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca rel=\"external\" href=\"https://www.ukri.org/news/250m-to-secure-the-uks-world-leading-position-in-technologies-of-tomorrow/\" class=\"govuk-link\"\u003e£250 million to secure the UK’s world-leading position in technologies of tomorrow\u003c/a\u003e, \u003cabbr title=\"UK Research and Innovation\"\u003eUKRI\u003c/abbr\u003e, 2023. \u003ca href=\"#fnref:30\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:31\" role=\"doc-endnote\"\u003e\n \u003cp\u003eMembers of the \u003cabbr title=\"Digital Regulation Cooperation Forum\"\u003eDRCF\u003c/abbr\u003e include the \u003cabbr title=\"Competition and Markets Authority\"\u003eCMA\u003c/abbr\u003e, \u003cabbr title=\"Information Commissioner's Office\"\u003eICO\u003c/abbr\u003e, \u003cabbr title=\"Financial Conduct Authority\"\u003eFCA\u003c/abbr\u003e, and \u003cabbr title=\"Office of Communications\"\u003eOfcom\u003c/abbr\u003e. See: \u003ca href=\"https://www.gov.uk/government/news/new-advisory-service-to-help-businesses-launch-ai-and-digital-innovations\" class=\"govuk-link\"\u003eNew advisory service to help businesses launch \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e and digital innovations\u003c/a\u003e, Department for Science, Innovation and Technology, 2023. \u003ca href=\"#fnref:31\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:32\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca rel=\"external\" href=\"https://aistandardshub.org/\" class=\"govuk-link\"\u003eThe \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e Standards Hub\u003c/a\u003e, \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e Standards Hub, 2022. \u003ca href=\"#fnref:32\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:33\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca href=\"https://www.gov.uk/guidance/cdei-portfolio-of-ai-assurance-techniques\" class=\"govuk-link\"\u003ePortfolio of \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e Assurance Techniques\u003c/a\u003e, Department for Science, Innovation and Technology, 2023. 
\u003ca href=\"#fnref:33\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:34\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca href=\"https://www.gov.uk/government/publications/public-attitudes-to-data-and-ai-tracker-survey-wave-3\" class=\"govuk-link\"\u003ePublic attitudes to data and \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e: Tracker survey (Wave 3)\u003c/a\u003e, Department for Science, Innovation and Technology, 2023. \u003ca href=\"#fnref:34\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:35\" role=\"doc-endnote\"\u003e\n \u003cp\u003eWe have previously categorised these as societal harms; misuse risks; and loss of control. See: \u003ca href=\"https://www.gov.uk/government/publications/frontier-ai-capabilities-and-risks-discussion-paper\" class=\"govuk-link\"\u003eFrontier \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e: capabilities and risks\u003c/a\u003e, Department for Science, Innovation and Technology, 2023. \u003ca href=\"#fnref:35\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:36\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca rel=\"external\" href=\"https://braiduk.org/\" class=\"govuk-link\"\u003eBridging Responsible \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e Divides\u003c/a\u003e, \u003cabbr title=\"Bridging Responsible AI Divides\"\u003eBRAID\u003c/abbr\u003e UK, 2024. \u003ca href=\"#fnref:36\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:37\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca rel=\"external\" href=\"https://iuk.ktn-uk.org/news/ai-skills-for-business-guidance-feedback-consultation-call-from-the-alan-turing-institute/\" class=\"govuk-link\"\u003e\u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e Skills for Business Guidance: Feedback Consultation Call from The Alan Turing Institute\u003c/a\u003e, Innovate UK, 2023. \u003ca href=\"#fnref:37\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:38\" role=\"doc-endnote\"\u003e\n \u003cp\u003eA recent study by the Institute for the Future of Work shows that the net impact on skills and job creation for UK firms that have adopted \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e and robotics technologies is positive. However, these positive impacts on jobs and job quality are associated with the levels of readiness within a firm. See: \u003ca rel=\"external\" href=\"https://www.ifow.org/publications/adoption-of-ai-in-uk-firms-and-the-consequences-for-jobs\" class=\"govuk-link\"\u003eWhat drives UK firms to adopt \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e and robotics, and what are the consequences for jobs?\u003c/a\u003e, Institute for the Future of Work, 2023. 
\u003ca href=\"#fnref:38\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:39\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca href=\"https://www.gov.uk/government/publications/the-impact-of-ai-on-uk-jobs-and-training\" class=\"govuk-link\"\u003eThe impact of \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e on UK jobs and training\u003c/a\u003e, Department for Education, 2023. \u003ca href=\"#fnref:39\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:40\" role=\"doc-endnote\"\u003e\n \u003cp\u003eApprenticeships are for people aged 16 and over who are not in full time education. See: \u003ca href=\"https://www.gov.uk/apply-apprenticeship\" class=\"govuk-link\"\u003eFind an apprenticeship,\u003c/a\u003e Department for Education, n.d.. \u003ca href=\"#fnref:40\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:41\" role=\"doc-endnote\"\u003e\n \u003cp\u003eSkills Bootcamps are for adults aged 19 and over. See: \u003ca href=\"https://www.gov.uk/guidance/find-a-skills-bootcamp\" class=\"govuk-link\"\u003eFind a skills bootcamp\u003c/a\u003e, Department for Education, 2024 [2022]. \u003ca href=\"#fnref:41\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:42\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca href=\"https://www.gov.uk/government/publications/lifelong-learning-entitlement-lle-overview/lifelong-learning-entitlement-overview\" class=\"govuk-link\"\u003eLifelong Learning Entitlement overview\u003c/a\u003e, Department for Education, 2024. \u003ca href=\"#fnref:42\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:43\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca href=\"https://www.gov.uk/government/publications/a-world-class-education-system-the-advanced-british-standard\" class=\"govuk-link\"\u003eA world-class education system: The Advanced British Standard\u003c/a\u003e, Department for Education, 2023. \u003ca href=\"#fnref:43\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:44\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca href=\"https://www.gov.uk/government/publications/creative-industries-sector-vision\" class=\"govuk-link\"\u003eCreative Industries Sector Vision\u003c/a\u003e, Department for Culture, Media and Sport, 2023. \u003ca href=\"#fnref:44\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:45\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca href=\"https://www.gov.uk/government/publications/frontier-ai-capabilities-and-risks-discussion-paper\" class=\"govuk-link\"\u003eFrontier \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e: capabilities and risks\u003c/a\u003e, Department for Science, Innovation and Technology, 2023. 
\u003ca href=\"#fnref:45\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:46\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca rel=\"external\" href=\"https://link.springer.com/article/10.1007/s00146-023-01676-3\" class=\"govuk-link\"\u003eAlgorithmic discrimination in the credit domain: what do we know about it?\u003c/a\u003e, Ana Cristina Bicharra Garcia et al., 2023. \u003ca href=\"#fnref:46\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:47\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca rel=\"external\" href=\"https://www.nature.com/articles/s41599-023-02079-x\" class=\"govuk-link\"\u003eEthics and discrimination in artificial intelligence-enabled recruitment practices,\u003c/a\u003e Zhisheng Chen, 2023. \u003ca href=\"#fnref:47\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:48\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca rel=\"external\" href=\"https://fairnessinnovationchallenge.co.uk/\" class=\"govuk-link\"\u003eFairness Innovation Challenge\u003c/a\u003e, Department for Science, Innovation and Technology; Innovate UK, 2023. \u003ca href=\"#fnref:48\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:49\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca rel=\"external\" href=\"https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/\" class=\"govuk-link\"\u003eGuidance on \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e and data protection\u003c/a\u003e, \u003cabbr title=\"Information Commissioner's Office\"\u003eICO\u003c/abbr\u003e, 2023. \u003ca href=\"#fnref:49\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:50\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca rel=\"external\" href=\"https://science.police.uk/delivery/resources/covenant-for-using-artificial-intelligence-ai-in-policing/\" class=\"govuk-link\"\u003eCovenant for Using Artificial Intelligence (\u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e) in Policing\u003c/a\u003e, National Police Chiefs’ Council, n.d.. \u003ca href=\"#fnref:50\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:51\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca href=\"https://www.gov.uk/government/publications/frontier-ai-capabilities-and-risks-discussion-paper\" class=\"govuk-link\"\u003eFrontier \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e: capabilities and risks,\u003c/a\u003e Department for Science, Innovation and Technology, 2023. 
\u003ca href=\"#fnref:51\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:52\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca rel=\"external\" href=\"https://misinforeview.hks.harvard.edu/article/misinformation-in-action-fake-news-exposure-is-linked-to-lower-trust-in-media-higher-trust-in-government-when-your-side-is-in-power/\" class=\"govuk-link\"\u003eMisinformation in action: Fake news exposure is linked to lower trust in media, higher trust in government when your side is in power\u003c/a\u003e, Katherine Ognyanova et al., 2020. \u003ca href=\"#fnref:52\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:53\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca href=\"https://www.gov.uk/government/publications/emerging-processes-for-frontier-ai-safety/emerging-processes-for-frontier-ai-safety\" class=\"govuk-link\"\u003eEmerging Processes for Frontier \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e Safety,\u003c/a\u003e Department for Science, Innovation and Technology, 2023. \u003ca href=\"#fnref:53\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:54\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca href=\"https://www.gov.uk/cma-cases/ai-foundation-models-initial-review\" class=\"govuk-link\"\u003e\u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e Foundation Models: initial review,\u003c/a\u003e \u003cabbr title=\"Competition and Markets Authority\"\u003eCMA\u003c/abbr\u003e, 2023. \u003ca href=\"#fnref:54\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:55\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca rel=\"external\" href=\"https://oxfordinsights.com/wp-content/uploads/2023/12/2023-Government-AI-Readiness-Index-2.pdf\" class=\"govuk-link\"\u003eGovernment \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e Readiness Index 202\u003c/a\u003e3, Oxford Insights, 2023 \u003ca href=\"#fnref:55\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:56\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca href=\"https://www.gov.uk/government/news/chancellor-to-cut-admin-workloads-to-free-up-frontline-staff\" class=\"govuk-link\"\u003eChancellor to cut admin workloads to free up frontline staff,\u003c/a\u003e HM Treasury; Home Office, 2023. \u003ca href=\"#fnref:56\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:57\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca href=\"https://www.gov.uk/government/news/21-million-to-roll-out-artificial-intelligence-across-the-nhs\" class=\"govuk-link\"\u003e£21 million to roll out artificial intelligence across the \u003cabbr title=\"National Health Service\"\u003eNHS\u003c/abbr\u003e\u003c/a\u003e, Department of Health and Social Care, 2023. 
\u003ca href=\"#fnref:57\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:58\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca href=\"https://www.gov.uk/government/calls-for-evidence/generative-artificial-intelligence-in-education-call-for-evidence\" class=\"govuk-link\"\u003eGenerative artificial intelligence in education call for evidence\u003c/a\u003e, Department for Education, 2023. \u003ca href=\"#fnref:58\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:59\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca href=\"https://www.gov.uk/government/publications/generative-ai-framework-for-hmg/generative-ai-framework-for-hmg-html\" class=\"govuk-link\"\u003eGenerative \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e Framework for HMG\u003c/a\u003e, Cabinet Office and Central Digital and Data Office, 2024. \u003ca href=\"#fnref:59\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:60\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca rel=\"external\" href=\"https://www.ncsc.gov.uk/section/advice-guidance/all-topics?topics=Artificial%20intelligence\u0026amp;sort=date%2Bdesc\" class=\"govuk-link\"\u003eArtificial Intelligence,\u003c/a\u003e National Cyber Security Centre, n.d.. \u003ca href=\"#fnref:60\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:61\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca href=\"https://www.gov.uk/government/publications/frontier-ai-capabilities-and-risks-discussion-paper\" class=\"govuk-link\"\u003eFrontier \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e: capabilities and risks\u003c/a\u003e, Department for Science, Innovation and Technology, 2023. \u003ca href=\"#fnref:61\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:62\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca rel=\"external\" href=\"https://www.nature.com/articles/s42256-022-00465-9\" class=\"govuk-link\"\u003eDual use of artificial-intelligence-powered drug discovery,\u003c/a\u003e Fabio Urbina et al., 2022. \u003ca href=\"#fnref:62\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:63\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca href=\"https://www.gov.uk/government/publications/frontier-ai-capabilities-and-risks-discussion-paper\" class=\"govuk-link\"\u003eFrontier \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e: capabilities and risks\u003c/a\u003e, Department for Science, Innovation and Technology, 2023. 
\u003ca href=\"#fnref:63\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:64\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca href=\"https://www.gov.uk/government/publications/uk-biological-security-strategy\" class=\"govuk-link\"\u003eUK Biological Security Strategy,\u003c/a\u003e Cabinet Office, 2023 \u003ca href=\"#fnref:64\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:65\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca href=\"https://www.gov.uk/government/publications/national-vision-for-engineering-biology\" class=\"govuk-link\"\u003eNational vision for engineering biology,\u003c/a\u003e Department for Science, Innovation and Technology, 2023. \u003ca href=\"#fnref:65\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:66\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca href=\"https://www.gov.uk/government/publications/frontier-ai-capabilities-and-risks-discussion-paper\" class=\"govuk-link\"\u003eFrontier \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e: capabilities and risks\u003c/a\u003e, Department for Science, Innovation and Technology, 2023. \u003ca href=\"#fnref:66\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:67\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca rel=\"external\" href=\"https://cdn.openai.com/papers/practices-for-governing-agentic-ai-systems.pdf\" class=\"govuk-link\"\u003ePractices for Governing Agentic \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e Systems\u003c/a\u003e, Yonadav Shavit et al., 2023. \u003ca href=\"#fnref:67\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:68\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca href=\"https://www.gov.uk/government/publications/frontier-ai-capabilities-and-risks-discussion-paper\" class=\"govuk-link\"\u003eFrontier \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e: capabilities and risks,\u003c/a\u003e Department for Science, Innovation and Technology, 2023. \u003ca href=\"#fnref:68\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:69\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca rel=\"external\" href=\"https://www.reuters.com/technology/race-towards-autonomous-ai-agents-grips-silicon-valley-2023-07-17/\" class=\"govuk-link\"\u003eRace towards ‘autonomous’ \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e agents grips Silicon Valley\u003c/a\u003e, Anna Tong and Jeffrey Dastin, 2023; \u003ca rel=\"external\" href=\"https://openai.com/blog/introducing-superalignment\" class=\"govuk-link\"\u003eIntroducing superalignment\u003c/a\u003e, Jan Leike and Ilya Sutskever (OpenAI), 2023; \u003ca rel=\"external\" href=\"https://deepmind.google/about/\" class=\"govuk-link\"\u003e\u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e could be one of humanity’s most useful inventions\u003c/a\u003e, Google Deepmind, n.d.. 
\u003ca href=\"#fnref:69\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:70\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca rel=\"external\" href=\"https://assets.publishing.service.gov.uk/media/653bc393d10f3500139a6ac5/future-risks-of-frontier-ai-annex-a.pdf\" class=\"govuk-link\"\u003eFuture Risks of Frontier \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e,\u003c/a\u003e Government Office for Science, 2023. \u003ca href=\"#fnref:70\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:71\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca href=\"https://www.gov.uk/government/speeches/prime-ministers-speech-on-ai-26-october-2023\" class=\"govuk-link\"\u003ePrime Minister’s speech on \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e: 26 October 2023\u003c/a\u003e, Prime Minister’s Office, 10 Downing Street, 2023. \u003ca href=\"#fnref:71\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:72\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca rel=\"external\" href=\"https://futureoflife.org/open-letter/pause-giant-ai-experiments/\" class=\"govuk-link\"\u003ePause Giant \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e Experiments: An Open Letter\u003c/a\u003e, Future of Life Institute, 2023. \u003ca href=\"#fnref:72\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:73\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca href=\"https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach\" class=\"govuk-link\"\u003e\u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e regulation: a pro-innovation approach\u003c/a\u003e, Department for Science, Innovation and Technology, 2023. \u003ca href=\"#fnref:73\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:74\" role=\"doc-endnote\"\u003e\n \u003cp\u003eWe note, for instance, the enforcement action of the \u003cabbr title=\"Information Commissioner's Office\"\u003eICO\u003c/abbr\u003e who have used data protection law to hold organisations using \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e systems that process personal data to account for breaches of data protection law. The \u003cabbr title=\"Competition and Markets Authority\"\u003eCMA\u003c/abbr\u003e’s initial review of foundation models notes that accountability for obligations under competition and consumer law applies across the \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e life cycle to both developers and deployers. See: \u003ca href=\"https://www.gov.uk/cma-cases/ai-foundation-models-initial-review\" class=\"govuk-link\"\u003e\u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e Foundation Models: initial review,\u003c/a\u003e \u003cabbr title=\"Competition and Markets Authority\"\u003eCMA\u003c/abbr\u003e, 2023. 
Similarly, the Medicines and Medical Devices Act 2021 gives the \u003cabbr title=\"Medicines and Healthcare products Regulatory Agency\"\u003eMHRA\u003c/abbr\u003e enforcement powers sufficient to hold manufacturers of medical devices accountable, including the power to require that unsafe devices are removed from the market. In addition, enforcement of serious non-compliance can, where appropriate, result in criminal prosecution through the courts. \u003ca href=\"#fnref:74\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:75\" role=\"doc-endnote\"\u003e\n \u003cp\u003eThe same model may be deployed directly by the developer and also integrated into an almost limitless variety of systems, products and tools that will fall under the remit of multiple regulators. \u003ca href=\"#fnref:75\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:76\" role=\"doc-endnote\"\u003e\n \u003cp\u003eThe law may allocate liability to “Quantum Talent Technologies” in this scenario if the actor has established an “agency” relationship according to equality law or was privately contractually obligated to abide by equality law. The law may also attribute liability along the supply chain in negligence if there is a duty of care that has been breached causing foreseeable damage. However, some laws only apply to actors based in the UK. In this scenario, data protection law would apply, allowing the \u003cabbr title=\"Information Commissioner's Office\"\u003eICO\u003c/abbr\u003e to take enforcement action for any failure by a relevant data controller (such as “Count Your Pennies Ltd”) to process personal data fairly and lawfully. \u003ca href=\"#fnref:76\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:77\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca href=\"https://www.gov.uk/guidance/equality-act-2010-guidance\" class=\"govuk-link\"\u003eEquality Act 2010: guidance\u003c/a\u003e, Government Equalities Office and Equality and Human Rights Commission, 2015 [2013]. \u003ca href=\"#fnref:77\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:78\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca rel=\"external\" href=\"https://www.aisafetysummit.gov.uk/policy-updates/#company-policies\" class=\"govuk-link\"\u003eCompany Policies\u003c/a\u003e, \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e Safety Summit, 2023. \u003ca href=\"#fnref:78\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:79\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca href=\"https://www.gov.uk/government/publications/emerging-processes-for-frontier-ai-safety/emerging-processes-for-frontier-ai-safety#data-input-controls-and-audits\" class=\"govuk-link\"\u003eEmerging Processes for Frontier \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e Safety,\u003c/a\u003e Department for Science, Innovation and Technology, 2023. 
\u003ca href=\"#fnref:79\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:80\" role=\"doc-endnote\"\u003e\n \u003cp\u003eResponsible capability scaling is an emerging framework to manage risks associated with highly capable \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e and guide decision-making about \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e development and deployment. See: \u003ca href=\"https://www.gov.uk/government/publications/emerging-processes-for-frontier-ai-safety/emerging-processes-for-frontier-ai-safety#responsible-capability-scaling\" class=\"govuk-link\"\u003eResponsible Capability Scaling\u003c/a\u003e in \u003ca href=\"https://www.gov.uk/government/publications/emerging-processes-for-frontier-ai-safety/emerging-processes-for-frontier-ai-safety\" class=\"govuk-link\"\u003eEmerging Processes for Frontier \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e Safety\u003c/a\u003e. Department for Science, Innovation and Technology, 2023. \u003ca href=\"#fnref:80\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:81\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca href=\"https://www.gov.uk/government/news/international-expertise-to-drive-international-ai-safety-report\" class=\"govuk-link\"\u003eInternational expertise to drive International \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e Safety Report\u003c/a\u003e, Department for Science, Innovation and Technology, 2024. \u003ca href=\"#fnref:81\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:82\" role=\"doc-endnote\"\u003e\n \u003cp\u003eTo support the government’s planning and policy development, and given the material uncertainties that exist, the Government Office for Science has prepared a foresight report outlining possible scenarios that may arise in the context of \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e development, proliferation and impact in 2030. See: \u003ca href=\"https://www.gov.uk/government/publications/frontier-ai-capabilities-and-risks-discussion-paper/future-risks-of-frontier-ai-annex-a\" class=\"govuk-link\"\u003eFuture risks of frontier \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e (Annex A),\u003c/a\u003e Government Office for Science, 2023. A full report on the scenarios will be published shortly (this report will not be a statement of government policy). \u003ca href=\"#fnref:82\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:83\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca href=\"https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach\" class=\"govuk-link\"\u003e\u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e regulation: a pro-innovation approach\u003c/a\u003e, Department for Science, Innovation and Technology, 2023. 
\u003ca href=\"#fnref:83\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:84\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca href=\"https://www.gov.uk/government/publications/uk-international-technology-strategy\" class=\"govuk-link\"\u003eUK International Technology Strategy,\u003c/a\u003e Foreign, Commonwealth \u0026amp; Development Office, 2023. \u003ca href=\"#fnref:84\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:85\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca href=\"https://www.gov.uk/government/speeches/prime-ministers-speech-on-ai-26-october-2023\" class=\"govuk-link\"\u003ePrime Minister’s speech on \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e: 26 October 2023\u003c/a\u003e, Prime Minister’s Office, 10 Downing Street, 2023. \u003ca href=\"#fnref:85\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:86\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca href=\"https://www.gov.uk/government/publications/ai-safety-summit-2023-the-bletchley-declaration/the-bletchley-declaration-by-countries-attending-the-ai-safety-summit-1-2-november-2023\" class=\"govuk-link\"\u003eThe Bletchley Declaration by Countries Attending the \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e Safety Summit, 1-2 November 2023\u003c/a\u003e, Department for Science, Innovation, and Technology; Foreign, Commonwealth and Development Office; Prime Minister’s Office, 10 Downing Street, 2023. \u003ca href=\"#fnref:86\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:87\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca href=\"https://www.gov.uk/government/news/world-leaders-top-ai-companies-set-out-plan-for-safety-testing-of-frontier-as-first-global-ai-safety-summit-concludes\" class=\"govuk-link\"\u003eWorld leaders, top \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e companies set out plan for safety testing of frontier as first global \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e Safety Summit concludes\u003c/a\u003e, Prime Minister’s Office, 10 Downing Street; Department for Science, Innovation and Technology, 2023. \u003ca href=\"#fnref:87\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:88\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca href=\"https://www.gov.uk/government/news/international-expertise-to-drive-international-ai-safety-report\" class=\"govuk-link\"\u003eInternational expertise to drive International \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e Safety Report\u003c/a\u003e, Department for Science, Innovation and Technology, 2024. 
\u003ca href=\"#fnref:88\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:89\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca rel=\"external\" href=\"https://www.mofa.go.jp/ecm/ec/page5e_000076.html\" class=\"govuk-link\"\u003e\u003cabbr title=\"Group of Seven\"\u003eG7\u003c/abbr\u003e Leaders’ Statement on the Hiroshima \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e Process\u003c/a\u003e, Ministry of Foreign Affairs Government of Japan, 2023. \u003ca href=\"#fnref:89\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:90\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca rel=\"external\" href=\"https://www.mea.gov.in/bilateral-documents.htm?dtl/37084/G20_New_Delhi_Leaders_Declaration\" class=\"govuk-link\"\u003e\u003cabbr title=\"Group of 20\"\u003eG20\u003c/abbr\u003e New Delhi Leaders’ Declaration\u003c/a\u003e, Ministry of External Affairs Government of India, 2023 \u003ca href=\"#fnref:90\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:91\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca rel=\"external\" href=\"https://sdgs.un.org/goals\" class=\"govuk-link\"\u003eThe 17 goals\u003c/a\u003e, United Nations, 2023. \u003ca href=\"#fnref:91\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:92\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca rel=\"external\" href=\"https://gpai.ai/2023-GPAI-Ministerial-Declaration.pdf\" class=\"govuk-link\"\u003e\u003cabbr title=\"Global Partnership on AI\"\u003eGPAI\u003c/abbr\u003e New Delhi Ministerial Declaration\u003c/a\u003e, Global Partnership on \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e, 2023. \u003ca href=\"#fnref:92\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:93\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca rel=\"external\" href=\"https://oecd.ai/en/ai-principles\" class=\"govuk-link\"\u003e\u003cabbr title=\"Organisation for Economic Co-operation and Development\"\u003eOECD\u003c/abbr\u003e \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e Principles overview\u003c/a\u003e, \u003cabbr title=\"Organisation for Economic Co-operation and Development\"\u003eOECD\u003c/abbr\u003e, 2024. \u003ca href=\"#fnref:93\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:94\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca href=\"https://www.gov.uk/guidance/cdei-portfolio-of-ai-assurance-techniques\" class=\"govuk-link\"\u003e\u003cabbr title=\"Centre for Data Ethics and Innovation\"\u003eCDEI\u003c/abbr\u003e portfolio of \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e assurance techniques,\u003c/a\u003e Centre for Data Ethics and Innovation; Department for Science, Innovation and Technology, 2023. 
\u003ca href=\"#fnref:94\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:95\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca rel=\"external\" href=\"https://oecd.ai/en/\" class=\"govuk-link\"\u003eCatalogue of tools and metrics for trustworthy \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e\u003c/a\u003e, \u003cabbr title=\"Organisation for Economic Co-operation and Development\"\u003eOECD\u003c/abbr\u003e, n.d.. \u003ca href=\"#fnref:95\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:96\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca rel=\"external\" href=\"https://www.unesco.org/en/articles/recommendation-ethics-artificial-intelligence\" class=\"govuk-link\"\u003eRecommendation on the Ethics of Artificial Intelligence\u003c/a\u003e, \u003cabbr title=\"United Nations Educational, Scientific and Cultural Organization\"\u003eUNESCO\u003c/abbr\u003e, 2023. \u003ca href=\"#fnref:96\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:97\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca rel=\"external\" href=\"https://www.un.org/sites/un2.un.org/files/ai_advisory_body_interim_report.pdf\" class=\"govuk-link\"\u003eGoverning \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e for Humanity,\u003c/a\u003e United Nations, 2023. \u003ca href=\"#fnref:97\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:98\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca rel=\"external\" href=\"https://aistandardshub.org/\" class=\"govuk-link\"\u003eThe \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e Standards Hub\u003c/a\u003e, \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e Standards Hub, 2022. \u003ca href=\"#fnref:98\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:99\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca href=\"https://www.gov.uk/government/news/uk-unites-with-global-partners-to-accelerate-development-using-ai\" class=\"govuk-link\"\u003eUK unites with global partners to accelerate development using \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e,\u003c/a\u003e Foreign, Commonwealth \u0026amp; Development Office, 2023. \u003ca href=\"#fnref:99\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:100\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca href=\"https://www.gov.uk/government/publications/international-science-partnerships-fund-ispf\" class=\"govuk-link\"\u003eInternational Science Partnerships Fund\u003c/a\u003e, \u003cabbr title=\"UK Research and Innovation\"\u003eUKRI\u003c/abbr\u003e, 2023. 
\u003ca href=\"#fnref:100\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:101\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca href=\"https://www.gov.uk/government/publications/the-atlantic-declaration#:~:text=On%208%20June%202023%2C%20the,the%20challenges%20of%20this%20moment.\" class=\"govuk-link\"\u003eThe Atlantic Declaration\u003c/a\u003e, Prime Minister’s Office, 10 Downing Street, Foreign, Commonwealth \u0026amp; Development Office, Department for Business and Trade, 2023.US \u003ca href=\"#fnref:101\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:102\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca href=\"https://www.gov.uk/government/publications/the-hiroshima-accord\" class=\"govuk-link\"\u003eThe Hiroshima Accord: An enhanced UK-Japan global strategic partnership\u003c/a\u003e, Prime Minister’s Office, 10 Downing Street, 2023. \u003ca href=\"#fnref:102\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:103\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca href=\"https://www.gov.uk/government/publications/the-downing-street-accord-a-united-kingdom-republic-of-korea-global-strategic-partnership\" class=\"govuk-link\"\u003eThe Downing Street Accord: A United Kingdom-Republic of Korea Global Strategic Partnership\u003c/a\u003e, Prime Minister’s Office, 10 Downing Street, 2023. \u003ca href=\"#fnref:103\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:104\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca href=\"https://www.gov.uk/government/publications/uk-singapore-joint-declaration-9-september-2023/joint-declaration-by-the-prime-ministers-of-the-republic-of-singapore-and-the-united-kingdom-of-great-britain-and-northern-ireland-on-a-strategic-part#:~:text=This%20Joint%20Declaration%20on%20the,and%20prosperity%20of%20our%20countries.\" class=\"govuk-link\"\u003eJoint Declaration by the Prime Ministers of the Republic of Singapore and the United Kingdom of Great Britain and Northern Ireland on a Strategic Partnership\u003c/a\u003e, Prime Minister’s Office, 10 Downing Street, 2023. \u003ca href=\"#fnref:104\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:105\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca rel=\"external\" href=\"https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/\" class=\"govuk-link\"\u003eGuidance on \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e and data protection\u003c/a\u003e, \u003cabbr title=\"Information Commissioner's Office\"\u003eICO\u003c/abbr\u003e, 2023. 
\u003ca href=\"#fnref:105\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:106\" role=\"doc-endnote\"\u003e\n \u003cp\u003eDeveloped by the Department for Science, Innovation and Technology (\u003cabbr title=\"Department for Science, Innovation and Technology\"\u003eDSIT\u003c/abbr\u003e) and Central Digital and Data Office (\u003cabbr title=\"Central Digital and Data Office\"\u003eCDDO\u003c/abbr\u003e) for the public sector. \u003ca href=\"#fnref:106\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:107\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca href=\"https://www.gov.uk/government/publications/emerging-processes-for-frontier-ai-safety/emerging-processes-for-frontier-ai-safety\" class=\"govuk-link\"\u003eEmerging Processes for Frontier \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e Safety\u003c/a\u003e\u003ca href=\"https://www.gov.uk/government/publications/emerging-processes-for-frontier-ai-safety/emerging-processes-for-frontier-ai-safety#data-input-controls-and-audits\" class=\"govuk-link\"\u003e,\u003c/a\u003e Department for Science, Innovation and Technology, 2023.databases. \u003ca href=\"#fnref:107\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:108\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca href=\"https://www.gov.uk/cma-cases/ai-foundation-models-initial-review\" class=\"govuk-link\"\u003e\u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e Foundation Models: initial review,\u003c/a\u003e \u003cabbr title=\"Competition and Markets Authority\"\u003eCMA\u003c/abbr\u003e, 2023; \u003ca rel=\"external\" href=\"https://www.asa.org.uk/news/generative-ai-advertising-decoding-ai-regulation.html\" class=\"govuk-link\"\u003eGenerative \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e \u0026amp; Advertising: Decoding \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e Regulation\u003c/a\u003e, ASA, 2023; \u003ca rel=\"external\" href=\"https://www.ofcom.org.uk/news-centre/2023/what-generative-ai-means-for-communications-sector\" class=\"govuk-link\"\u003eWhat generative \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e means for the communications sector\u003c/a\u003e, \u003cabbr title=\"Office of Communications\"\u003eOfcom\u003c/abbr\u003e, 2023. 
\u003ca href=\"#fnref:108\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:109\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca rel=\"external\" href=\"https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/how-do-we-ensure-fairness-in-ai/\" class=\"govuk-link\"\u003eHow do we ensure fairness in \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e?\u003c/a\u003e, \u003cabbr title=\"Information Commissioner's Office\"\u003eICO\u003c/abbr\u003e, 2023; \u003ca href=\"https://www.gov.uk/government/publications/software-and-artificial-intelligence-ai-as-a-medical-device/software-and-artificial-intelligence-ai-as-a-medical-device\" class=\"govuk-link\"\u003eSoftware and Artificial Intelligence (\u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e) as a Medical Device,\u003c/a\u003e \u003cabbr title=\"Medicines and Healthcare products Regulatory Agency\"\u003eMHRA\u003c/abbr\u003e, updated 2023 [2021]. \u003ca href=\"#fnref:109\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:110\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca rel=\"external\" href=\"https://assets.publishing.service.gov.uk/media/655cd137544aea0019fb31e4/_8243__Government_Response_Draft_HMG_response_to_McLean_Cross-Cutting_Base_-_November_2023_PDF.pdf\" class=\"govuk-link\"\u003eResponse to Professor Dame Angela McLean’s Pro-Innovation Regulation of Technologies Review: Cross Cutting\u003c/a\u003e, HM Treasury, 2023. \u003ca href=\"#fnref:110\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:111\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca href=\"https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach\" class=\"govuk-link\"\u003e\u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e regulation: a pro-innovation approach\u003c/a\u003e, Department for Science, Innovation and Technology, 2023. \u003ca href=\"#fnref:111\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:112\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca href=\"https://www.gov.uk/guidance/cdei-portfolio-of-ai-assurance-techniques\" class=\"govuk-link\"\u003e\u003cabbr title=\"Centre for Data Ethics and Innovation\"\u003eCDEI\u003c/abbr\u003e portfolio of \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e assurance techniques,\u003c/a\u003e Centre for Data Ethics and Innovation; Department for Science, Innovation and Technology, 2023. \u003ca href=\"#fnref:112\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:113\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca rel=\"external\" href=\"https://fairnessinnovationchallenge.co.uk/\" class=\"govuk-link\"\u003eFairness Innovation Challenge\u003c/a\u003e, Department for Science, Innovation and Technology; InnovateUK, 2023. 
\u003ca href=\"#fnref:113\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:114\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca href=\"https://www.gov.uk/government/publications/emerging-processes-for-frontier-ai-safety/emerging-processes-for-frontier-ai-safety\" class=\"govuk-link\"\u003eEmerging Processes for Frontier \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e Safety,\u003c/a\u003e Department for Science, Innovation and Technology, 2023. \u003ca href=\"#fnref:114\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:115\" role=\"doc-endnote\"\u003e\n \u003cp\u003eFor an overview of \u003cabbr title=\"Department for Science, Innovation and Technology\"\u003eDSIT\u003c/abbr\u003e’s latest research on public attitudes to data and \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e, see: \u003ca href=\"https://www.gov.uk/government/publications/public-attitudes-to-data-and-ai-tracker-survey-wave-3\" class=\"govuk-link\"\u003ePublic attitudes to data and \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e: Tracker survey (Wave 3)\u003c/a\u003e, Department for Science, Innovation, and Technology, 2023 \u003ca href=\"#fnref:115\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:116\" role=\"doc-endnote\"\u003e\n \u003cp\u003eThe \u003cabbr title=\"Algorithmic Transparency Recording Standard\"\u003eATRS\u003c/abbr\u003e is the Algorithmic Transparency Recording Standard. For more detail see section 5.1. \u003ca href=\"#fnref:116\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:117\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca href=\"https://www.gov.uk/government/publications/frontier-ai-capabilities-and-risks-discussion-paper\" class=\"govuk-link\"\u003eFrontier \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e: capabilities and risks\u003c/a\u003e, Department for Science, Innovation, and Technology, 2023. \u003ca href=\"#fnref:117\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:118\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca href=\"https://www.gov.uk/government/publications/ai-safety-summit-2023-the-bletchley-declaration/the-bletchley-declaration-by-countries-attending-the-ai-safety-summit-1-2-november-2023\" class=\"govuk-link\"\u003eThe Bletchley Declaration by Countries Attending the \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e Safety Summit, 1-2 November 2023\u003c/a\u003e, Department for Science, Innovation, and Technology; Foreign, Commonwealth and Development Office; Prime Minister’s Office, 10 Downing Street, 2023. 
\u003ca href=\"#fnref:118\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:119\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca href=\"https://www.gov.uk/government/news/international-expertise-to-drive-international-ai-safety-report\" class=\"govuk-link\"\u003eInternational expertise to drive International \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e Safety Report\u003c/a\u003e, Department for Science, Innovation and Technology, 2024. \u003ca href=\"#fnref:119\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:120\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca href=\"https://www.gov.uk/government/publications/emerging-processes-for-frontier-ai-safety/emerging-processes-for-frontier-ai-safety\" class=\"govuk-link\"\u003eEmerging Processes for Frontier \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e Safety,\u003c/a\u003e Department for Science, Innovation and Technology, 2023. \u003ca href=\"#fnref:120\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:121\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca href=\"https://www.gov.uk/government/publications/emerging-processes-for-frontier-ai-safety/emerging-processes-for-frontier-ai-safety\" class=\"govuk-link\"\u003eEmerging Processes for Frontier \u003cabbr title=\"artificial intelligence\"\u003eAI\u003c/abbr\u003e Safety,\u003c/a\u003e Department for Science, Innovation and Technology, 2023. \u003ca href=\"#fnref:121\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:122\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca href=\"https://www.gov.uk/government/publications/office-for-artificial-intelligence-information-collection-and-analysis-privacy-notice/office-for-artificial-intelligence-information-collection-and-analysis-privacy-notice\" class=\"govuk-link\"\u003eOffice for Artificial Intelligence – information collection and analysis: privacy notice,\u003c/a\u003e Department for Science, Innovation and Technology, 2023. \u003ca href=\"#fnref:122\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:123\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca href=\"https://www.gov.uk/government/publications/office-for-artificial-intelligence-information-collection-and-analysis-privacy-notice/office-for-artificial-intelligence-information-collection-and-analysis-privacy-notice\" class=\"govuk-link\"\u003eOffice for Artificial Intelligence – information collection and analysis: privacy notice,\u003c/a\u003e Department for Science, Innovation and Technology, 2023. 
\u003ca href=\"#fnref:123\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003cli id=\"fn:124\" role=\"doc-endnote\"\u003e\n \u003cp\u003e\u003ca rel=\"external\" href=\"https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1147045/uk_ai_regulation_impact_assessment.pdf\" class=\"govuk-link\"\u003eUK Artificial Intelligence Regulation Impact Assessment\u003c/a\u003e, Department for Science, Innovation and Technology, 2023. \u003ca href=\"#fnref:124\" class=\"govuk-link\" role=\"doc-backlink\" aria-label=\"go to where this is referenced\"\u003e↩\u003c/a\u003e\u003c/p\u003e\n \u003c/li\u003e\n \u003c/ol\u003e\n\u003c/div\u003e" } } ] } </script><script type="application/ld+json"> { "@context": "http://schema.org", "@type": "BreadcrumbList", "itemListElement": [ { "@type": "ListItem", "position": 1, "item": { "name": "Home", "@id": "https://www.gov.uk/" } }, { "@type": "ListItem", "position": 2, "item": { "name": "Business and industry", "@id": "https://www.gov.uk/business-and-industry" } }, { "@type": "ListItem", "position": 3, "item": { "name": "Science and innovation", "@id": "https://www.gov.uk/business-and-industry/science-and-innovation" } }, { "@type": "ListItem", "position": 4, "item": { "name": "Artificial intelligence", "@id": "https://www.gov.uk/business-and-industry/artificial-intelligence" } }, { "@type": "ListItem", "position": 5, "item": { "name": "AI regulation: a pro-innovation approach – policy proposals", "@id": "https://www.gov.uk/government/consultations/ai-regulation-a-pro-innovation-approach-policy-proposals" } } ] } </script> </body> </html>
