High-risk AI systems pursuant to Article 6(2) are the AI systems listed in any of the following areas:
1. Biometrics, in so far as their use is permitted under relevant Union or national law:
(a) remote biometric identification systems.
This shall not include AI systems intended to be used for biometric verification the sole purpose of which is to confirm that a specific natural person is the person he or she claims to be;
(b) AI systems intended to be used for biometric categorisation, according to sensitive or protected attributes or characteristics based on the inference of those attributes or characteristics;
(c) AI systems intended to be used for emotion recognition.
2. Critical infrastructure: AI systems intended to be used as safety components in the management and operation of critical digital infrastructure, road traffic, or in the supply of water, gas, heating or electricity.
3. Education and vocational training:
(a) AI systems intended to be used to determine access or admission or to assign natural persons to educational and vocational training institutions at all levels;
(b) AI systems intended to be used to evaluate learning outcomes, including when those outcomes are used to steer the learning process of natural persons in educational and vocational training institutions at all levels;
(c) AI systems intended to be used for the purpose of assessing the appropriate level of education that an individual will receive or will be able to access, in the context of or within educational and vocational training institutions at all levels;
(d) AI systems intended to be used for monitoring and detecting prohibited behaviour of students during tests in the context of or within educational and vocational training institutions at all levels.
4. Employment, workers’ management and access to self-employment:
(a) AI systems intended to be used for the recruitment or selection of natural persons, in particular to place targeted job advertisements, to analyse and filter job applications, and to evaluate candidates;
(b) AI systems intended to be used to make decisions affecting terms of work-related relationships, the promotion or termination of work-related contractual relationships, to allocate tasks based on individual behaviour or personal traits or characteristics or to monitor and evaluate the performance and behaviour of persons in such relationships.
5. Access to and enjoyment of essential private services and essential public services and benefits:
(a) AI systems intended to be used by public authorities or on behalf of public authorities to evaluate the eligibility of natural persons for essential public assistance benefits and services, including healthcare services, as well as to grant, reduce, revoke, or reclaim such benefits and services;
(b) AI systems intended to be used to evaluate the creditworthiness of natural persons or establish their credit score, with the exception of AI systems used for the purpose of detecting financial fraud;
(c) AI systems intended to be used for risk assessment and pricing in relation to natural persons in the case of life and health insurance;
(d) AI systems intended to evaluate and classify emergency calls by natural persons or to be used to dispatch, or to establish priority in the dispatching of, emergency first response services, including by police, firefighters and medical aid, as well as of emergency healthcare patient triage systems.
6. Law enforcement, in so far as their use is permitted under relevant Union or national law:
(a) AI systems intended to be used by or on behalf of law enforcement authorities, or by Union institutions, bodies, offices or agencies in support of law enforcement authorities or on their behalf to assess the risk of a natural person becoming the victim of criminal offences;
(b) AI systems intended to be used by or on behalf of law enforcement authorities or by Union institutions, bodies, offices or agencies in support of law enforcement authorities as polygraphs or similar tools;
(c) AI systems intended to be used by or on behalf of law enforcement authorities, or by Union institutions, bodies, offices or agencies, in support of law enforcement authorities to evaluate the reliability of evidence in the course of the investigation or prosecution of criminal offences;
(d) AI systems intended to be used by law enforcement authorities or on their behalf or by Union institutions, bodies, offices or agencies in support of law enforcement authorities for assessing the risk of a natural person offending or re-offending not solely on the basis of the profiling of natural persons as referred to in Article 3(4) of Directive (EU) 2016/680, or to assess personality traits and characteristics or past criminal behaviour of natural persons or groups;
(e) AI systems intended to be used by or on behalf of law enforcement authorities or by Union institutions, bodies, offices or agencies in support of law enforcement authorities for the profiling of natural persons as referred to in Article 3(4) of Directive (EU) 2016/680 in the course of the detection, investigation or prosecution of criminal offences.
7. Migration, asylum and border control management, in so far as their use is permitted under relevant Union or national law:
(a) AI systems intended to be used by or on behalf of competent public authorities or by Union institutions, bodies, offices or agencies as polygraphs or similar tools;
(b) AI systems intended to be used by or on behalf of competent public authorities or by Union institutions, bodies, offices or agencies to assess a risk, including a security risk, a risk of irregular migration, or a health risk, posed by a natural person who intends to enter or who has entered into the territory of a Member State;
(c) AI systems intended to be used by or on behalf of competent public authorities or by Union institutions, bodies, offices or agencies to assist competent public authorities for the examination of applications for asylum, visa or residence permits and for associated complaints with regard to the eligibility of the natural persons applying for a status, including related assessments of the reliability of evidence;
(d) AI systems intended to be used by or on behalf of competent public authorities, or by Union institutions, bodies, offices or agencies, in the context of migration, asylum or border control management, for the purpose of detecting, recognising or identifying natural persons, with the exception of the verification of travel documents.
8. Administration of justice and democratic processes:
(a) AI systems intended to be used by a judicial authority or on their behalf to assist a judicial authority in researching and interpreting facts and the law and in applying the law to a concrete set of facts, or to be used in a similar way in alternative dispute resolution;
(b) AI systems intended to be used for influencing the outcome of an election or referendum or the voting behaviour of natural persons in the exercise of their vote in elections or referenda. This does not include AI systems to the output of which natural persons are not directly exposed, such as tools used to organise, optimise or structure political campaigns from an administrative or logistical point of view.