A Great Transformation? Unveiling Global Trends in AI Governance Legislation

Date: 2025-12-12     Authors: Han Haijiao (International Law Committee; Beijing Weiheng (Shanghai) Law Firm) and Huang Yipu (Beijing Weiheng (Shanghai) Law Firm)

The launch of ChatGPT has underscored the ever-greater participation of Generative Artificial Intelligence (“Generative AI” or “AI”) in our daily lives, and the consequent necessity of properly regulating the development and application of Generative AI. Trained on large volumes of data and tasked with generating answers to prompts in the form of text, images, and audio, Generative AI is capable of resolving complex tasks and promoting productivity and innovation across sectors. However, AI has also given rise to challenges and risks.

I The AI-Associated Risks: Why Must AI Be Regulated?

Firstly, AI may undermine privacy by inadvertently and excessively collecting private information; the collected data often exceed what is necessary for the intended uses, which could lead to unintended exposure or misuse. Secondly, AI may lead to the spread of misinformation and disinformation. For instance, because AI cannot verify the contents it generates from training data, it can produce fabricated references to non-existent sources, or false content of high persuasiveness. Thirdly, AI poses a risk of bias and discrimination. Trained on massive volumes of biased data containing discriminatory stereotypes that reflect the systematic inequalities of dominant cultures, AI could produce discriminatory treatment of certain marginalised societal groups based on their social backgrounds. Lastly, AI may threaten public safety and security. AI systems may be misused to generate inappropriate content such as pornography, violence, or even incitement of suicide or self-injury, thereby posing risks to public safety. Concurrently, AI could be maliciously utilised for illegal or terrorist activities.

II The Global Trend of AI Governance

At the international level, several international organizations have agreed on numerous initiatives for AI regulation in an attempt to regulate AI through developers’ voluntary alignment with certain principles, such as the Bletchley Declaration, the Hiroshima AI Principles, and the OECD AI Principles. Notably, the Council of Europe is drafting the world's first AI convention, which, upon entry into force, would oblige contracting states to enact legislation mandating risk management measures for AI development. These initiatives have substantially reached a consensus on the principles that should govern AI development and application, which include, among others, transparency, fairness, safety, and privacy protection. Under these principles, AI providers must ensure sufficient transparency during the operation of AI systems by providing all relevant information in an accessible, clear, accurate, and timely manner, enabling users to understand the general functionality, the level of accuracy, the associated risks, and the corresponding risk mitigation measures of the AI. The development of AI systems must maintain an adequate level of fairness and respect for the rule of law, democratic principles, and human rights. Necessary measures should be adopted to prevent algorithmic discrimination, such as equity assessments, the use of diverse and representative data, safeguards against proxies for demographic features, accessibility for disabled people, disparity evaluation, and appropriate human supervision. Developers must conduct pre-market risk management assessments to identify and mitigate AI-associated risks, adopt cybersecurity measures to ensure system robustness and stability, and continuously monitor post-market compliance with the standards. More importantly, regulation on private data protection must be enacted in conjunction, to guarantee that data collection and processing by AI are permitted only with user consent and only to the extent necessary for the intended purposes. However, the unenforceability of these initiatives has left AI substantially unregulated de facto, thereby necessitating the enactment of binding regulation for AI governance.

At the domestic level, many jurisdictions have proposed different approaches to AI regulation. Among these, the UK and US have opted to regulate AI within the scope of existing legislation. Pursuant to the UK AI white paper, regulators will be directed to exercise the powers delegated under existing legislation to issue guidance obliging AI developers to comply with specified principles. Similarly, President Biden has signed an Executive Order mandating developers to conduct safety tests and report the results to the Federal Government, and ordering relevant authorities to issue standards and guidance to monitor AI development’s compliance with the principles specified under the US AI Bill of Rights. However, this approach is apparently flawed due to the inability of existing regulations to address the specific risks of AI. For instance, while the recent Online Safety Act in the UK could partially ensure the safety of certain internet services by restricting illegal activities and the production of harmful content, these protections are not directly applicable to AI unless the AI is deployed within the specified internet services. Conversely, Canada and the EU have opted to enact specialised AI legislation. While a Private Members’ Bill for AI regulation has also been introduced to the UK House of Lords by Conservative Lord Holmes of Richmond, this Bill is overly simplified and substantially resembles the current approach of the UK Government, given its reliance on relevant authorities to enact delegated legislation to regulate AI in accordance with specified principles. More importantly, this Bill is highly unlikely to proceed due to the lack of support from the incumbent Conservative government.

III The EU AI Act: Benchmark for Global AI Legislation?

The EU is known for its strong stance on digital and data regulation, and it has enacted several key regulations relevant to AI regulation, such as the GDPR and the DSA. The GDPR, the strictest regulation for the protection of personal information in the world, governs the collection, processing, and transfer of private data across the EU. Under Article 5 of the GDPR, private data can only be collected to the extent necessary for a legitimate purpose, with an appropriate level of accuracy and security, in a lawful, fair, and transparent manner. The lawfulness of data processing is contingent upon the satisfaction of at least one of the bases specified under Article 6 of the GDPR, including informed consent. Articles 12 to 22 of the GDPR protect the rights of data subjects in relation to data processing. The DSA, by contrast, similar to the UK Online Safety Act, ensures the safety of digital services by restricting illegal activities, disinformation, and the production of harmful content. While the GDPR and DSA could partially address the risks to privacy and safety posed by AI, these regulations face an issue similar to that of the UK Online Safety Act, namely their inapplicability to certain uses of Generative AI and the consequent inability to address the specific risks associated with AI. Consequently, the enactment of specialised legislation to regulate AI has become increasingly urgent.

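To make the logic of Articles 5 and 6 concrete, the sketch below models the purpose-limitation and lawful-basis checks described above in Python. It is a conceptual illustration only, not legal advice or a compliance tool; all names, fields, and categories are assumptions introduced for this example.

# A minimal conceptual sketch of the GDPR logic described above: data may be
# processed only for a declared purpose, only to the extent necessary, and
# only on a lawful basis (Article 6), such as informed consent.
from dataclasses import dataclass

# The six lawful bases enumerated in Article 6(1), abbreviated here.
LAWFUL_BASES = {"consent", "contract", "legal_obligation",
                "vital_interests", "public_task", "legitimate_interests"}

@dataclass
class ProcessingRequest:
    purpose: str               # the declared, legitimate purpose
    fields_requested: set      # data fields the system wants to collect
    fields_necessary: set      # fields actually needed for the purpose
    lawful_basis: str          # one of LAWFUL_BASES
    consent_given: bool        # whether informed consent was obtained

def is_processing_permitted(req: ProcessingRequest) -> bool:
    # Data minimisation (Article 5): collect no more than the purpose requires.
    if not req.fields_requested <= req.fields_necessary:
        return False
    # Lawfulness (Article 6): at least one recognised basis must apply, and
    # where that basis is consent, consent must actually have been given.
    if req.lawful_basis not in LAWFUL_BASES:
        return False
    if req.lawful_basis == "consent" and not req.consent_given:
        return False
    return True

request = ProcessingRequest(purpose="account creation",
                            fields_requested={"email"},
                            fields_necessary={"email", "password_hash"},
                            lawful_basis="consent",
                            consent_given=True)
print(is_processing_permitted(request))  # True: minimal data, valid consent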

In response to this, the European Commission, on 21st April 2021, published a legislative proposal for a Regulation intended to establish harmonised rules for AI regulation with direct applicability across the EU, which, upon its entry into force, would become the world’s first specialised legislation for AI regulation. The European Commission proposed to regulate Generative AI through a tiered, risk-based approach to ensure that regulated AI systems are subject to rules proportionate to their associated risks: AI systems are categorised into four classes of risk based on their intended uses, with each class subject to different regulatory obligations. On 8th December 2023, the EU AI Act received its final approval and will enter into force in early 2024. The final version of the EU AI Act retains the tiered, risk-based approach proposed by the European Commission.

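The tiered structure can be pictured as a simple mapping from an AI system's intended use to a risk class and its attached obligations. The Python sketch below is a rough conceptual model under stated assumptions: the category lists are abridged and illustrative, and the real classification turns on Annex III and the final legal text, not on string matching.

# A conceptual sketch of the tiered, risk-based structure of the EU AI Act
# described above. Category lists are illustrative and abridged.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "conformity assessment + post-market monitoring"
    LIMITED = "transparency obligations"
    MINIMAL = "voluntary code of conduct"

PROHIBITED_USES = {"social scoring", "behavioural manipulation",
                   "emotion inference", "biometric identification"}
HIGH_RISK_USES = {"critical infrastructure", "education", "recruitment",
                  "essential services", "migration control",
                  "administration of justice", "law enforcement"}
LIMITED_RISK_USES = {"chatbot", "deepfake", "synthetic media"}

def classify(intended_use: str) -> RiskTier:
    # Classification is driven by the system's intended use; systems with
    # no specific intended use (foundation models) escape this scheme,
    # which is the gap discussed in section 6) below.
    if intended_use in PROHIBITED_USES:
        return RiskTier.UNACCEPTABLE
    if intended_use in HIGH_RISK_USES:
        return RiskTier.HIGH
    if intended_use in LIMITED_RISK_USES:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(classify("recruitment").value)  # conformity assessment + post-market monitoring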

1) Unacceptable-risk AI: Prohibition

The European Commission proposed to prohibit certain AI applications that pose unacceptable risks, such as behavioural distortion or manipulation, biometric categorisation, social scoring by public authorities, and biometric identification by law enforcement unless necessary for crime prevention. The European Parliament subsequently proposed to expand the scope of prohibition to cover any deceptive techniques that may undermine users’ ability to make informed decisions, as well as AI applications for social scoring, emotion inference, and all biometric identification practices. The final approved version has extended the list of prohibited “unacceptable-risk AI” to encompass the amendments adopted by the European Parliament, with an exception allowing law enforcement to apply remote biometric identification under appropriate safeguards.

2) High-risk AI: Pre-market conformity assessment and post-market monitoring

The second class of AI applications, termed “high-risk AI”, is subject to detailed conformity assessment and post-market monitoring requirements instead of prohibition. Under the initial proposal, the AI applications specified under Annex III are classified as “high-risk”, such as critical infrastructure management, educational training, recruitment and employee management, critical private or public services, migration control, the administration of justice, and certain law enforcement systems. Providers of high-risk AI are subject to numerous obligations, including conducting conformity assessments to ensure compliance with the requirements specified under Title III Chapter 2 of the EU AI Act.

3) Transparency obligations for limited-risk AI

Providers of certain limited-risk AI capable of generating or modifying image, audio, or video content must ensure sufficient transparency by notifying users that the content is AI-generated. Limited-risk AI may include deepfakes or chatbots. The European Parliament further proposed to oblige providers of limited-risk AI to disclose to users the functionality of the AI system, the identity of the provider, and the availability of human oversight.

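As a rough illustration of these transparency duties, the sketch below attaches the disclosures described above to a piece of generated content. The function and field names are hypothetical, introduced only for this example; the Act prescribes the obligations, not any particular data format.

# A minimal sketch of the transparency duties described above: a limited-risk
# system should tell users that content is AI-generated, identify the
# provider, and state whether human oversight is available.
def with_ai_disclosure(content: str, provider: str,
                       human_oversight: bool) -> dict:
    return {
        "content": content,
        "notice": "This content was generated by an AI system.",
        "provider": provider,              # identity of the provider
        "human_oversight": human_oversight # availability of human oversight
    }

labelled = with_ai_disclosure("An AI-written summary...",
                              provider="ExampleAI", human_oversight=True)
print(labelled["notice"])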

4) Voluntary code of conduct for minimal-risk AI

Providers of minimal-risk AI are encouraged to develop Codes of Conduct and voluntarily align with the conformity assessment requirements specified under Title III Chapter 2 of the EU AI Act.

5) Governance

Similar to the European Data Protection Board established under the GDPR, the European Commission proposed a European Artificial Intelligence Board (“AI Board”) to issue recommendations on technical specifications, standards, and the implementation of the EU AI Act. This body would comprise the relevant authorities of the member states and the European Data Protection Supervisor. While the Council of the EU supports this composition, the European Parliament has proposed an alternative: a fully independent AI governance body named the “AI Office”. Both proposed entities are retained in the final version of the EU AI Act, with the AI Office serving as an enforcement body and the AI Board functioning as an advisory body.

6) Regulating foundation models and general-purpose AI: a limitation on innovation and competitiveness?

One significant shortcoming of the initial proposal is its failure to account for AI systems designed to produce a generality of outputs serving various applications, whether through direct use or incorporation into other AI systems. Such AI systems, commonly referred to as “foundation models” or “general-purpose AI”, cannot be classified into any of the risk tiers due to the absence of a specific intended use, thereby rendering them substantially unregulated under the initial proposal. To address this issue, the Council of the EU proposed a new Title IA requiring general-purpose AI that may be used for high-risk purposes to comply with the conformity assessment requirements. Conversely, the European Parliament proposed a new Article 28b imposing horizontal obligations on all foundation models, including adopting a risk management system, training on appropriately governed datasets to avoid bias and discrimination, and adhering to the transparency obligations under Article 52 of the EU AI Act.

Following the negotiations, EU legislators initially rejected horizontal rules and agreed, on 24th October 2023, on a similarly tiered approach to regulate foundation models based on their level of risk. However, on 18th November 2023, three major economies in the EU - Germany, France, and Italy - opted against binding regulation of foundation models, citing concerns over a potential deterrent effect on innovation and competition, and jointly supported self-regulation through codes of conduct. Although this controversy was resolved by the final version of the EU AI Act, which introduces horizontal transparency obligations for foundation models and stricter rules for “high-impact” foundation models, it has highlighted a potential disadvantage of overly stringent AI regulation: the additional compliance costs may undermine the competitiveness of the AI sector.

7) Final approved version of the EU AI Act and its material modifications

On 8th December 2023, EU legislators approved the final compromise text of the EU AI Act which, despite substantial consistency with the initial proposal, adopts some material modifications: an extended list of prohibited AI practices (biometric identification, emotion inference, social scoring, behavioural manipulation); horizontal transparency obligations for foundation models and stricter rules for “high-impact” foundation models/general-purpose AI; and retention of the “AI Office” proposed by the European Parliament as a supplement to the “AI Board” proposed by the European Commission. In addition, the final version amends the initial proposal to oblige certain public entities to register their applications of high-risk AI systems with regulators. Following this provisional agreement, the EU AI Act will be finalised promptly and enter into force in early 2024.

8) The ‘Brussels Effect’ and the EU AI Act’s potential influence on global AI governance

The ‘Brussels Effect’ generally refers to the global applicability of the EU’s regulations and standards. By leveraging its large market size, the EU often adopts high standards in various areas, including digital technologies and data privacy. When multinational corporations operate in the EU market, applying the highest standard globally is generally more practical than maintaining different standards in different regions, due to the high cost of differentiation. A notable example is the GDPR, which applies to overseas entities that collect EU citizens’ data, thereby forcing major multinational corporations, especially tech giants, to adhere to the GDPR globally. This worldwide adoption of the GDPR has also encouraged other regions to enact similar legislation, such as the Personal Information Protection Law of China and the California Consumer Privacy Act. Similarly, the EU’s Common Charger Directive, which obliges all electronic devices sold in the EU to adopt USB-C chargers, has forced Apple to abandon its proprietary Lightning connector for the iPhone entirely. As the EU AI Act, upon its entry into force, is set to become the strictest AI regulation globally, a similar Brussels Effect is likely to occur, forcing AI systems that operate globally, such as ChatGPT or Bard, to apply the EU AI Act universally. This global applicability could render the EU AI Act the de facto international standard for AI governance and substantially influence future Australian AI regulations.

IV The Canadian Artificial Intelligence and Data Act (“AIDA”): A More Suitable Approach for Australian AI Legislation?

In June 2022, the Canadian Government introduced the AIDA to the Canadian House of Commons. Under Sections 6 to 12 of this Bill, high-impact AI providers are subject to self-assessment obligations: adopting mandatory risk mitigation measures, keeping records, and notifying users about the intended uses, the types of content generated, and the risk mitigation measures. Providers must report any potential “material harm” of high-impact AI. The responsible Minister may inspect records, order a mandatory audit, or even prohibit the deployment of a specific AI system if there is a reasonable belief that the AI may produce harmful or “biased output”, infringe Sections 6 to 12, or cause imminent harm.

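The self-assessment flow under Sections 6 to 12 can be sketched as follows. This is a conceptual Python model under stated assumptions: the class, method, and record formats are invented for illustration and are not taken from the Bill.

# A conceptual sketch of the AIDA self-assessment duties described above:
# the provider, not an external body, mitigates risk, keeps records,
# notifies users, and must report potential "material harm".
from dataclasses import dataclass, field

@dataclass
class HighImpactSystem:
    name: str
    intended_use: str
    mitigation_measures: list = field(default_factory=list)
    records: list = field(default_factory=list)  # open to ministerial inspection

    def notify_users(self) -> str:
        # Duty to inform users of the intended use and mitigation measures.
        return (f"{self.name}: intended for {self.intended_use}; "
                f"mitigations: {', '.join(self.mitigation_measures) or 'none'}")

    def report_material_harm(self, description: str) -> dict:
        # Mandatory report to the responsible Minister; also kept on record.
        report = {"system": self.name, "harm": description}
        self.records.append(report)
        return report

screener = HighImpactSystem("ScreenerAI", "CV screening",
                            ["human review of rejections"])
print(screener.notify_users())
screener.report_material_harm("possible biased output against a protected group")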

One significant issue with the AIDA is that its obligations for high-impact AI are less comprehensive than those for high-risk AI under the EU AI Act. Additionally, compliance under the AIDA is ensured by self-assessment rather than by conformity assessment conducted by an authorised body. Nevertheless, the AIDA model could be a more suitable approach for future Australian AI legislation for several reasons. Unlike the rigid tiered approach under the EU AI Act, the AIDA grants the Canadian Government broad enforcement discretion in defining key terms such as “biased output”, “high-impact AI”, and “material harm”, and in establishing risk mitigation measures and penalties. The Minister may order a mandatory audit and prohibit the deployment of a specific AI system based on its potential risks. Freed from burdensome parliamentary scrutiny, this regulatory flexibility could enable continuous evaluation of AI-associated risks and accelerate decision-making, allowing suitable standards for diverse AI applications to be developed in a timely manner. Furthermore, unlike the EU AI Act, which limits penalties to administrative fines, the AIDA imposes criminal liability for severe infringements, potentially ensuring a higher level of compliance through stronger deterrence. This approach could offer standards comparable to the conformity assessment under the EU AI Act, but with potentially lower compliance costs due to its reliance on self-assessment.

V Bibliography

1) Legislation

CA Civ Code § 1798.100 (2018).

Directive (EU) 2022/2380 of the European Parliament and of the Council of 23 November 2022 amending Directive 2014/53/EU on the harmonization of the laws of the Member States relating to the making available on the market of radio equipment.

Online Safety Act 2023 (UK).

Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC.

Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single Market for Digital Services and amending Directive 2000/31/EC.

《中华人民共和国个人信息保护法》[Personal Information Protection Law of the People’s Republic of China] (People’s Republic of China), National People’s Congress, Order No.91/2021, 20th August 2021.

2) Other Legislative Materials

Artificial Intelligence (Regulation) HL Bill (2023-24) 11 (UK).

Bill C-27, Digital Charter Implementation Act, 1st Sess, 44th Parl, 2022, pt 3 (Canada).

European Union, European Commission, ‘Proposal for a regulation of the European Parliament and of the Council laying down harmonized rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts’ COM (2021) 206 final, 21 April 2021.

European Union, Council of the European Union, ‘General approach adopted by the Council of the European Union on 25 November 2022 on the proposal for a regulation of the European Parliament and of the Council laying down harmonized rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts’, 25 November 2022.

European Union, European Parliament, ‘Amendments adopted by the European Parliament on 14 June 2023 on the proposal for a regulation of the European Parliament and of the Council laying down harmonized rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts’, 14 June 2023.

3) Draft treaty

Council of Europe, Committee on Artificial Intelligence, ‘Consolidated working draft of the framework Convention on Artificial Intelligence, Human Rights, Democracy, and the Rule of Law’ CAI (2023)18, 7 July 2023.

4) Secondary Resources

Anu Bradford, ‘The Brussels Effect’ (2012) 107(1) Northwestern University Law Review.

AI Safety Summit, ‘The Bletchley Declaration by Countries Attending the AI Safety Summit 1-2 November 2023’ (1 November 2023).

BBC, ‘ChatGPT banned in Italy over privacy concerns’ (Web Page, 1 April 2023).

Council of the European Union, ‘Artificial intelligence act: Council and Parliament strike a deal on the first rules for AI in the world’ (Web Page, 9 December 2023).

Charlotte Siegmann and Markus Anderljung, ‘The Brussels Effect and Artificial Intelligence: How EU regulation will impact the global AI market’ (Cambridge University Press, 2022).

Department for Science, Innovation and Technology (UK), ‘A pro-innovation approach to AI regulation’ (2023).

Department for Science, Innovation and Technology (UK), ‘Capabilities and risks from frontier AI: A discussion paper on the need for further research into AI risk’ (2023).

Lilian Edwards, ‘Expert explainer: The EU AI Act proposal’, Ada Lovelace Institute (Web Page, 8 April 2022).

Organisation for Economic Co-operation and Development, ‘What are the OECD Principles on AI?’ (2020).

Science, Innovation and Technology Committee, Parliament of the United Kingdom, ‘The governance of artificial intelligence: interim report’ (Ninth Report of Session 2022-23, 31 August 2023).

The White House, ‘Blueprint for an AI Bill of Rights: making automated systems work for the American people’ (2022).

The White House, ‘Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence’ (Web Page, 30 October 2023).

The Group of Seven, ‘Hiroshima Process International Guiding Principles for Organizations Developing Advanced AI Systems’ (30 October 2023).

The Group of Seven, ‘Hiroshima Process International Code of Conduct for Organizations Developing Advanced AI Systems’ (30 October 2023).

Tech Policy Press, ‘Will Disagreement Over Foundation Models Put the EU AI Act at Risk?’ (Web Page, 30 November 2023).

Reuters, ‘Exclusive: Germany, France and Italy reach agreement on future AI regulation’ (Web Page, 21 November 2023).


