Algorithms are increasingly shaping children’s lives, but new guardrails could prevent them from getting hurt.
Algorithms can change the course of children’s lives. Kids are interacting with Alexas1 that can record their voice data and influence their speech and social development. They’re binging on videos on TikTok and YouTube pushed to them by recommendation systems that end up shaping their worldviews.
Algorithms are also increasingly used to determine what their education is like, whether they’ll receive health care, and even whether their parents are deemed fit to care for them. Sometimes this can have devastating effects: this past summer, for example, thousands of students lost their university admissions after algorithms—used in lieu of pandemic-canceled standardized tests—inaccurately predicted their academic performance2.
Children, in other words, are often at the forefront when it comes to using and being used by AI, and that can leave them in a position to get hurt. “Because they are developing intellectually and emotionally and physically, they are very shapeable,” says Steve Vosloo, a policy specialist for digital connectivity at UNICEF, the United Nations Children’s Fund.
Vosloo led the drafting of a new set of guidelines from UNICEF designed to help governments and companies develop AI policies that consider children’s needs. Released on September 16, 2020, the nine new guidelines are the culmination of several consultations held with policymakers, child development researchers, AI practitioners, and kids around the world. They also take into consideration the UN Convention on the Rights of the Child, a human rights treaty adopted in 1989.
The guidelines aren’t meant to be yet another set of AI principles, many of which already say the same things. In January 2020, a Harvard Berkman Klein Center review of 36 of the most prominent documents guiding national and company AI strategies found eight common themes—among them privacy, safety, fairness, and explainability.
Rather, the UNICEF guidelines are meant to complement these existing themes and tailor them to children. For example, AI systems shouldn’t just be explainable—they should be explainable to kids. They should also consider children’s unique developmental needs. “Children have additional rights to adults,” Vosloo says. They’re also estimated to account for at least one-third of online users. “We’re not talking about a minority group here,” he points out.

In addition to mitigating AI harms, the goal of the principles is to encourage the development of AI systems that could improve children’s growth and well-being. If they’re designed well, for example, AI-based learning tools have been shown to improve children’s critical-thinking and problem-solving skills, and they can be useful for kids with learning disabilities. Emotional AI assistants, though relatively nascent, could provide mental-health support and have been demonstrated to improve the social skills of autistic children. Face recognition, used with careful limitations, could help identify children who’ve been kidnapped or trafficked.
Children should also be educated about AI and encouraged to participate in its development. It isn’t just about protecting them, Vosloo says. It’s about empowering them and giving them the agency to shape their future.
UNICEF isn’t the only one thinking about the issue. The day before those draft guidelines came out, the Beijing Academy of Artificial Intelligence (BAAI), an organization backed by the Chinese Ministry of Science and Technology and the Beijing municipal government, released a set of AI principles for children too.
The announcement came a year after BAAI released the Beijing AI Principles, understood to be the guiding values for China’s national AI development. The new principles outlined specifically for children are meant to be “a concrete implementation” of the more general ones, says Yi Zeng, the director of the AI Ethics and Sustainable Development Research Center at BAAI, who led their drafting. They closely align with UNICEF’s guidelines, also touching on privacy, fairness, explainability, and child well-being, though some of the details are more specific to China’s concerns. A guideline to improve children’s physical health, for example, includes using AI to help tackle environmental pollution.
While the two efforts are not formally related, the timing is also not coincidental. After a flood of AI principles in the last few years, both lead drafters say creating more tailored guidelines for children was a logical next step. “Talking about disadvantaged groups, of course children are the most disadvantaged ones,” Zeng says. “This is why we really need [to give] special care to this group of people.” The teams conferred with one another as they drafted their respective documents. When UNICEF held a consultation workshop in East Asia, Zeng attended as a speaker.

UNICEF now plans to run a series of pilot programs with various partner countries to observe how practical and effective their guidelines are in different contexts. BAAI has formed a working group with representatives from some of the largest companies driving the country’s national AI strategy, including education technology company TAL, consumer electronics company Xiaomi, computer vision company Megvii, and internet giant Baidu. The hope is to get them to start heeding the principles in their products and influence other companies and organizations to do the same.
Both Vosloo and Zeng hope that by articulating the unique concerns AI poses for children, the guidelines will raise awareness of these issues. “We come into this with eyes wide open,” Vosloo says. “We understand this is kind of new territory for many governments and companies. So if over time we see more examples of children being included in the AI or policy development cycle, more care around how their data is collected and analyzed—if we see AI made more explainable to children or to their caregivers—that would be a win for us.”
1 Alexa is Amazon’s cloud-based voice assistant, first available on the Amazon Echo smart speaker. Like Apple’s Siri and Microsoft’s Cortana, Alexa is designed to respond to a wide range of commands and can even hold conversations with users.
2 In the summer of 2020, with exams canceled under pandemic lockdowns, the UK government used software to predict students’ grades. Roughly 40% of students received lower marks than expected and lost places at the universities they had hoped to attend.