[
{
"type":"Govern",
"title":"GOVERN 1.1",
"category":"GOVERN-1",
"description":"Legal and regulatory requirements involving AI are understood, managed, and documented.",
"section_about":"AI systems may be subject to specific applicable legal and regulatory requirements. Some legal requirements can mandate (e.g., nondiscrimination, data privacy and security controls) documentation, disclosure, and increased AI system transparency. These requirements are complex and may not be applicable or differ across applications and contexts. \n \nFor example, AI system testing processes for bias measurement, such as disparate impact, are not applied uniformly within the legal context. Disparate impact is broadly defined as a facially neutral policy or practice that disproportionately harms a group based on a protected trait. Notably, some modeling algorithms or debiasing techniques that rely on demographic information, could also come into tension with legal prohibitions on disparate treatment (i.e., intentional discrimination).\n\nAdditionally, some intended users of AI systems may not have consistent or reliable access to fundamental internet technologies (a phenomenon widely described as the \u201cdigital divide\u201d) or may experience difficulties interacting with AI systems due to disabilities or impairments. Such factors may mean different communities experience bias or other negative impacts when trying to access AI systems. Failure to address such design issues may pose legal risks, for example in employment related activities affecting persons with disabilities.",
"section_actions":"* Maintain awareness of the applicable legal and regulatory considerations and requirements specific to industry, sector, and business purpose, as well as the application context of the deployed AI system.\n* Align risk management efforts with applicable legal standards.\n* Maintain policies for training (and re-training) organizational staff about necessary legal or regulatory considerations that may impact AI-related design, development and deployment activities.",
"section_doc":"### Organizations can document the following\n- To what extent has the entity defined and documented the regulatory environment\u2014including minimum requirements in laws and regulations?\n- Has the system been reviewed for its compliance to applicable laws, regulations, standards, and guidance? \n- To what extent has the entity defined and documented the regulatory environment\u2014including applicable requirements in laws and regulations? \n- Has the system been reviewed for its compliance to relevant applicable laws, regulations, standards, and guidance? \n\n### AI Transparency Resources\n\nGAO-21-519SP: AI Accountability Framework for Federal Agencies & Other Entities. [URL](https:\/\/www.gao.gov\/products\/gao-21-519sp)",
"section_ref":"Andrew Smith, \"Using Artificial Intelligence and Algorithms,\" FTC Business Blog (2020). [URL](https:\/\/www.ftc.gov\/business-guidance\/blog\/2020\/04\/using-artificial-intelligence-and-algorithms)\n \nRebecca Kelly Slaughter, \"Algorithms and Economic Justice,\" ISP Digital Future Whitepaper & YJoLT Special Publication (2021). [URL](https:\/\/law.yale.edu\/sites\/default\/files\/area\/center\/isp\/documents\/algorithms_and_economic_justice_master_final.pdf)\n \nPatrick Hall, Benjamin Cox, Steven Dickerson, Arjun Ravi Kannan, Raghu Kulkarni, and Nicholas Schmidt, \"A United States fair lending perspective on machine learning,\" Frontiers in Artificial Intelligence 4 (2021). [URL](https:\/\/www.frontiersin.org\/articles\/10.3389\/frai.2021.695301\/full)\n\nAI Hiring Tools and the Law, Partnership on Employment & Accessible Technology (PEAT, peatworks.org). [URL](https:\/\/www.peatworks.org\/ai-disability-inclusion-toolkit\/ai-hiring-tools-and-the-law\/)",
"AI Actors":[
"Governance and Oversight"
],
"Topic":[
"Legal and Regulatory",
"Governance"
]
},
{
"type":"Govern",
"title":"GOVERN 1.2",
"category":"GOVERN-1",
"description":"The characteristics of trustworthy AI are integrated into organizational policies, processes, and procedures.",
"section_about":"Policies, processes, and procedures are central components of effective AI risk management and fundamental to individual and organizational accountability. All stakeholders benefit from policies, processes, and procedures which require preventing harm by design and default. \n\nOrganizational policies and procedures will vary based on available resources and risk profiles, but can help systematize AI actor roles and responsibilities throughout the AI lifecycle. Without such policies, risk management can be subjective across the organization, and exacerbate rather than minimize risks over time. Polices, or summaries thereof, are understandable to relevant AI actors. Policies reflect an understanding of the underlying metrics, measurements, and tests that are necessary to support policy and AI system design, development, deployment and use.\n\nLack of clear information about responsibilities and chains of command will limit the effectiveness of risk management.",
"section_actions":"Organizational AI risk management policies should be designed to:\n\n- Define key terms and concepts related to AI systems and the scope of their purposes and intended uses.\n- Connect AI governance to existing organizational governance and risk controls. \n- Align to broader data governance policies and practices, particularly the use of sensitive or otherwise risky data.\n- Detail standards for experimental design, data quality, and model training.\n- Outline and document risk mapping and measurement processes and standards.\n- Detail model testing and validation processes.\n- Detail review processes for legal and risk functions.\n- Establish the frequency of and detail for monitoring, auditing and review processes.\n- Outline change management requirements.\n- Outline processes for internal and external stakeholder engagement.\n- Establish whistleblower policies to facilitate reporting of serious AI system concerns.\n- Detail and test incident response plans.\n- Verify that formal AI risk management policies align to existing legal standards, and industry best practices and norms.\n- Establish AI risk management policies that broadly align to AI system trustworthy characteristics.\n- Verify that formal AI risk management policies include currently deployed and third-party AI systems.",
"section_doc":"### Organizations can document the following\n- To what extent do these policies foster public trust and confidence in the use of the AI system?\n- What policies has the entity developed to ensure the use of the AI system is consistent with its stated values and principles?\n- What policies and documentation has the entity developed to encourage the use of its AI system as intended?\n- To what extent are the model outputs consistent with the entity\u2019s values and principles to foster public trust and equity?\n\n### AI Transparency Resources\n\n\nGAO-21-519SP: AI Accountability Framework for Federal Agencies & Other Entities. [URL](https:\/\/www.gao.gov\/products\/gao-21-519sp)",
"section_ref":"Off. Comptroller Currency, Comptroller\u2019s Handbook: Model Risk Management (Aug. 2021). [URL](https:\/\/www.occ.gov\/publications-and-resources\/publications\/comptrollers-handbook\/files\/model-risk-management\/index-model-risk-management.html)\n\nGAO, \u201cArtificial Intelligence: An Accountability Framework for Federal Agencies and Other Entities,\u201d GAO@100 (GAO-21-519SP), June 2021. [URL](https:\/\/www.gao.gov\/assets\/gao-21-519sp.pdf)\n\nNIST, \"U.S. Leadership in AI: A Plan for Federal Engagement in Developing Technical Standards and Related Tools\". [URL](https:\/\/www.nist.gov\/system\/files\/documents\/2019\/08\/10\/ai_standards_fedengagement_plan_9aug2019.pdf)\n\nLipton, Zachary and McAuley, Julian and Chouldechova, Alexandra, Does mitigating ML\u2019s impact disparity require treatment disparity? Advances in Neural Information Processing Systems, 2018. [URL](https:\/\/proceedings.neurips.cc\/paper\/2018\/file\/8e0384779e58ce2af40eb365b318cc32-Paper.pdf)\n\nJessica Newman (2023) \u201cA Taxonomy of Trustworthiness for Artificial Intelligence: Connecting Properties of Trustworthiness with Risk Management and the AI Lifecycle,\u201d UC Berkeley Center for Long-Term Cybersecurity. [URL](https:\/\/cltc.berkeley.edu\/wp-content\/uploads\/2023\/01\/Taxonomy_of_AI_Trustworthiness.pdf)\n\nEmily Hadley (2022). Prioritizing Policies for Furthering Responsible Artificial Intelligence in the United States. 2022 IEEE International Conference on Big Data (Big Data), 5029-5038. [URL](https:\/\/arxiv.org\/abs\/2212.00740) \n\nSAS Institute, \u201cThe SAS\u00ae Data Governance Framework: A Blueprint for Success\u201d. [URL](https:\/\/www.sas.com\/content\/dam\/SAS\/en_us\/doc\/whitepaper1\/sas-data-governance-framework-107325.pdf)\n\nISO, \u201cInformation technology \u2014 Reference Model of Data Management, \u201c ISO\/IEC TR 10032:200. [URL](https:\/\/www.iso.org\/standard\/38607.html)\n\n\u201cPlay 5: Create a formal policy,\u201d Partnership on Employment & Accessible Technology (PEAT, peatworks.org). [URL](https:\/\/www.peatworks.org\/ai-disability-inclusion-toolkit\/the-equitable-ai-playbook\/play-5-create-a-formal-equitable-ai-policy\/) \n\n\"National Institute of Standards and Technology. (2018). Framework for improving critical infrastructure cybersecurity. [URL](https:\/\/nvlpubs.nist.gov\/nistpubs\/cswp\/nist.cswp.04162018.pdf)\n\nKaitlin R. Boeckl and Naomi B. Lefkovitz. \"NIST Privacy Framework: A Tool for Improving Privacy Through Enterprise Risk Management, Version 1.0.\" National Institute of Standards and Technology (NIST), January 16, 2020. [URL](https:\/\/www.nist.gov\/publications\/nist-privacy-framework-tool-improving-privacy-through-enterprise-risk-management.)\n\n\u201cplainlanguage.gov \u2013 Home,\u201d The U.S. Government. [URL](https:\/\/www.plainlanguage.gov\/)",
"AI Actors":[
"Governance and Oversight"
],
"Topic":[
"Trustworthy Characteristics",
"Governance",
"Validity and Reliability",
"Safety",
"Secure and Resilient",
"Accountability and Transparency",
"Explainability and Interpretability",
"Privacy",
"Fairness and Bias"
]
},
{
"type":"Govern",
"title":"GOVERN 1.3",
"category":"GOVERN-1",
"description":"Processes and procedures are in place to determine the needed level of risk management activities based on the organization's risk tolerance.",
"section_about":"Risk management resources are finite in any organization. Adequate AI governance policies delineate the mapping, measurement, and prioritization of risks to allocate resources toward the most material issues for an AI system to ensure effective risk management. Policies may specify systematic processes for assigning mapped and measured risks to standardized risk scales. \n\nAI risk tolerances range from negligible to critical \u2013 from, respectively, almost no risk to risks that can result in irredeemable human, reputational, financial, or environmental losses. Risk tolerance rating policies consider different sources of risk, (e.g., financial, operational, safety and wellbeing, business, reputational, or model risks). A typical risk measurement approach entails the multiplication, or qualitative combination, of measured or estimated impact and likelihood of impacts into a risk score (risk \u2248 impact x likelihood). This score is then placed on a risk scale. Scales for risk may be qualitative, such as red-amber-green (RAG), or may entail simulations or econometric approaches. Impact assessments are a common tool for understanding the severity of mapped risks. In the most fulsome AI risk management approaches, all models are assigned to a risk level.",
"section_actions":"- Establish policies to define mechanisms for measuring or understanding an AI system\u2019s potential impacts, e.g., via regular impact assessments at key stages in the AI lifecycle, connected to system impacts and frequency of system updates.\n- Establish policies to define mechanisms for measuring or understanding the likelihood of an AI system\u2019s impacts and their magnitude at key stages in the AI lifecycle. \n- Establish policies that define assessment scales for measuring potential AI system impact. Scales may be qualitative, such as red-amber-green (RAG), or may entail simulations or econometric approaches. \n- Establish policies for assigning an overall risk measurement approach for an AI system, or its important components, e.g., via multiplication or combination of a mapped risk\u2019s impact and likelihood (risk \u2248 impact x likelihood).\n- Establish policies to assign systems to uniform risk scales that are valid across the organization\u2019s AI portfolio (e.g. documentation templates), and acknowledge risk tolerance and risk levels may change over the lifecycle of an AI system.",
"section_doc":"### Organizations can document the following\n- How do system performance metrics inform risk tolerance decisions?\n- What policies has the entity developed to ensure the use of the AI system is consistent with organizational risk tolerance?\n- How do the entity\u2019s data security and privacy assessments inform risk tolerance decisions?\n\n\n### AI Transparency Resources\n- GAO-21-519SP: AI Accountability Framework for Federal Agencies & Other Entities. [URL](https:\/\/www.gao.gov\/products\/gao-21-519sp)",
"section_ref":"Board of Governors of the Federal Reserve System. SR 11-7: Guidance on Model Risk Management. (April 4, 2011). [URL](https:\/\/www.federalreserve.gov\/supervisionreg\/srletters\/sr1107.htm)\n\nThe Office of the Comptroller of the Currency. Enterprise Risk Appetite Statement. (Nov. 20, 2019). [URL](https:\/\/www.occ.treas.gov\/publications-and-resources\/publications\/banker-education\/files\/pub-risk-appetite-statement.pdf)\n\nBrenda Boultwood, How to Develop an Enterprise Risk-Rating Approach (Aug. 26, 2021). Global Association of Risk Professionals (garp.org). Accessed Jan. 4, 2023. [URL](https:\/\/www.garp.org\/risk-intelligence\/culture-governance\/how-to-develop-an-enterprise-risk-rating-approach)\n\nGAO-17-63: Enterprise Risk Management: Selected Agencies\u2019 Experiences Illustrate Good Practices in Managing Risk. [URL](https:\/\/www.gao.gov\/assets\/gao-17-63.pdf)",
"AI Actors":[
"Governance and Oversight"
],
"Topic":[
"Risk Tolerance",
"Governance"
]
},
{
"type":"Govern",
"title":"GOVERN 1.4",
"category":"GOVERN-1",
"description":"The risk management process and its outcomes are established through transparent policies, procedures, and other controls based on organizational risk priorities.",
"section_about":"Clear policies and procedures relating to documentation and transparency facilitate and enhance efforts to communicate roles and responsibilities for the Map, Measure and Manage functions across the AI lifecycle. Standardized documentation can help organizations systematically integrate AI risk management processes and enhance accountability efforts. For example, by adding their contact information to a work product document, AI actors can improve communication, increase ownership of work products, and potentially enhance consideration of product quality. Documentation may generate downstream benefits related to improved system replicability and robustness. Proper documentation storage and access procedures allow for quick retrieval of critical information during a negative incident. Explainable machine learning efforts (models and explanatory methods) may bolster technical documentation practices by introducing additional information for review and interpretation by AI Actors.",
"section_actions":"- Establish and regularly review documentation policies that, among others, address information related to:\n - AI actors contact informations\n - Business justification\n - Scope and usages\n - Expected and potential risks and impacts\n - Assumptions and limitations\n - Description and characterization of training data\n - Algorithmic methodology\n - Evaluated alternative approaches\n - Description of output data\n - Testing and validation results (including explanatory visualizations and information)\n - Down- and up-stream dependencies\n - Plans for deployment, monitoring, and change management\n - Stakeholder engagement plans\n- Verify documentation policies for AI systems are standardized across the organization and remain current.\n- Establish policies for a model documentation inventory system and regularly review its completeness, usability, and efficacy.\n- Establish mechanisms to regularly review the efficacy of risk management processes.\n- Identify AI actors responsible for evaluating efficacy of risk management processes and approaches, and for course-correction based on results.\n- Establish policies and processes regarding public disclosure of the use of AI and risk management material such as impact assessments, audits, model documentation and validation and testing results.\n- Document and review the use and efficacy of different types of transparency tools and follow industry standards at the time a model is in use.",
"section_doc":"### Organizations can document the following\n- To what extent has the entity clarified the roles, responsibilities, and delegated authorities to relevant stakeholders?\n- What are the roles, responsibilities, and delegation of authorities of personnel involved in the design, development, deployment, assessment and monitoring of the AI system?\n- How will the appropriate performance metrics, such as accuracy, of the AI be monitored after the AI is deployed? How much distributional shift or model drift from baseline performance is acceptable?\n\n### AI Transparency Resources\n- GAO-21-519SP: AI Accountability Framework for Federal Agencies & Other Entities. [URL](https:\/\/www.gao.gov\/products\/gao-21-519sp)\n- Intel.gov: AI Ethics Framework for Intelligence Community - 2020. [URL](https:\/\/www.intelligence.gov\/artificial-intelligence-ethics-framework-for-the-intelligence-community)",
"section_ref":"Bd. Governors Fed. Rsrv. Sys., Supervisory Guidance on Model Risk Management, SR Letter 11-7 (Apr. 4, 2011).\n\nOff. Comptroller Currency, Comptroller\u2019s Handbook: Model Risk Management (Aug. 2021). [URL](https:\/\/www.occ.gov\/publications-and-resources\/publications\/comptrollers-handbook\/files\/model-risk-management\/index-model-risk-management.html)\n\nMargaret Mitchell et al., \u201cModel Cards for Model Reporting.\u201d Proceedings of 2019 FATML Conference. [URL](https:\/\/arxiv.org\/pdf\/1810.03993.pdf)\n\nTimnit Gebru et al., \u201cDatasheets for Datasets,\u201d Communications of the ACM 64, No. 12, 2021. [URL](https:\/\/arxiv.org\/pdf\/1803.09010.pdf)\n\nEmily M. Bender, Batya Friedman, Angelina McMillan-Major (2022). A Guide for Writing Data Statements for Natural Language Processing. University of Washington. Accessed July 14, 2022. [URL](https:\/\/techpolicylab.uw.edu\/wp-content\/uploads\/2021\/11\/Data_Statements_Guide_V2.pdf)\n\nM. Arnold, R. K. E. Bellamy, M. Hind, et al. FactSheets: Increasing trust in AI services through supplier\u2019s declarations of conformity. IBM Journal of Research and Development 63, 4\/5 (July-September 2019), 6:1-6:13. [URL](https:\/\/techpolicylab.uw.edu\/wp-content\/uploads\/2021\/11\/Data_Statements_Guide_V2.pdf)\n\nNavdeep Gill, Abhishek Mathur, Marcos V. Conde (2022). A Brief Overview of AI Governance for Responsible Machine Learning Systems. ArXiv, abs\/2211.13130. [URL](https:\/\/arxiv.org\/pdf\/2211.13130.pdf)\n\nJohn Richards, David Piorkowski, Michael Hind, et al. A Human-Centered Methodology for Creating AI FactSheets. Bulletin of the IEEE Computer Society Technical Committee on Data Engineering. [URL](http:\/\/sites.computer.org\/debull\/A21dec\/p47.pdf)\n\nChristoph Molnar, Interpretable Machine Learning, lulu.com. [URL](https:\/\/christophm.github.io\/interpretable-ml-book\/)\n\nDavid A. Broniatowski. 2021. Psychological Foundations of Explainability and Interpretability in Artificial Intelligence. National Institute of Standards and Technology (NIST) IR 8367. National Institute of Standards and Technology, Gaithersburg, MD. [URL](https:\/\/doi.org\/10.6028\/NIST.IR.8367)\n\nOECD (2022), \u201cOECD Framework for the Classification of AI systems\u201d, OECD Digital Economy Papers, No. 323, OECD Publishing, Paris. [URL](https:\/\/doi.org\/10.1787\/cb6d9eca-en)",
"AI Actors":[
"Governance and Oversight"
],
"Topic":[
"Risk Management",
"Governance",
"Documentation"
]
},
{
"type":"Govern",
"title":"GOVERN 1.5",
"category":"GOVERN-1",
"description":"Ongoing monitoring and periodic review of the risk management process and its outcomes are planned, organizational roles and responsibilities are clearly defined, including determining the frequency of periodic review.",
"section_about":"AI systems are dynamic and may perform in unexpected ways once deployed or after deployment. Continuous monitoring is a risk management process for tracking unexpected issues and performance changes, in real-time or at a specific frequency, across the AI system lifecycle.\n\nIncident response and \u201cappeal and override\u201d are commonly used processes in information technology management. These processes enable real-time flagging of potential incidents, and human adjudication of system outcomes.\n\nEstablishing and maintaining incident response plans can reduce the likelihood of additive impacts during an AI incident. Smaller organizations which may not have fulsome governance programs, can utilize incident response plans for addressing system failures, abuse or misuse.",
"section_actions":"- Establish policies to allocate appropriate resources and capacity for assessing impacts of AI systems on individuals, communities and society.\n- Establish policies and procedures for monitoring and addressing AI system performance and trustworthiness, including bias and security problems, across the lifecycle of the system.\n- Establish policies for AI system incident response, or confirm that existing incident response policies apply to AI systems.\n- Establish policies to define organizational functions and personnel responsible for AI system monitoring and incident response activities.\n- Establish mechanisms to enable the sharing of feedback from impacted individuals or communities about negative impacts from AI systems.\n- Establish mechanisms to provide recourse for impacted individuals or communities to contest problematic AI system outcomes.\n- Establish opt-out mechanisms.",
"section_doc":"### Organizations can document the following\n- To what extent does the system\/entity consistently measure progress towards stated goals and objectives?\n- Did your organization implement a risk management system to address risks involved in deploying the identified AI solution (e.g. personnel risk or changes to commercial objectives)?\n- Did your organization address usability problems and test whether user interfaces served their intended purposes? \n\n### AI Transparency Resources\n- GAO-21-519SP: AI Accountability Framework for Federal Agencies & Other Entities. [URL](https:\/\/www.gao.gov\/products\/gao-21-519sp)\n- WEF Model AI Governance Framework Assessment 2020. [URL](https:\/\/www.pdpc.gov.sg\/-\/media\/Files\/PDPC\/PDF-Files\/Resource-for-Organisation\/AI\/SGModelAIGovFramework2.pdf)",
"section_ref":"National Institute of Standards and Technology. (2018). Framework for improving critical infrastructure cybersecurity. [URL](https:\/\/nvlpubs.nist.gov\/nistpubs\/cswp\/nist.cswp.04162018.pdf)\n\nNational Institute of Standards and Technology. (2012). Computer Security Incident Handling Guide. NIST Special Publication 800-61 Revision 2. [URL](https:\/\/nvlpubs.nist.gov\/nistpubs\/specialpublications\/nist.sp.800-61r2.pdf)",
"AI Actors":[
"Governance and Oversight",
"Operation and Monitoring"
],
"Topic":[
"Continuous monitoring",
"Governance"
]
},
{
"type":"Govern",
"title":"GOVERN 1.6",
"category":"GOVERN-1",
"description":"Mechanisms are in place to inventory AI systems and are resourced according to organizational risk priorities.",
"section_about":"An AI system inventory is an organized database of artifacts relating to an AI system or model. It may include system documentation, incident response plans, data dictionaries, links to implementation software or source code, names and contact information for relevant AI actors, or other information that may be helpful for model or system maintenance and incident response purposes. AI system inventories also enable a holistic view of organizational AI assets. A serviceable AI system inventory may allow for the quick resolution of:\n\n- specific queries for single models, such as \u201cwhen was this model last refreshed?\u201d \n- high-level queries across all models, such as, \u201chow many models are currently deployed within our organization?\u201d or \u201chow many users are impacted by our models?\u201d \n\nAI system inventories are a common element of traditional model risk management approaches and can provide technical, business and risk management benefits. Typically inventories capture all organizational models or systems, as partial inventories may not provide the value of a full inventory.",
"section_actions":"- Establish policies that define the creation and maintenance of AI system inventories. \n- Establish policies that define a specific individual or team that is responsible for maintaining the inventory. \n- Establish policies that define which models or systems are inventoried, with preference to inventorying all models or systems, or minimally, to high risk models or systems, or systems deployed in high-stakes settings.\n- Establish policies that define model or system attributes to be inventoried, e.g, documentation, links to source code, incident response plans, data dictionaries, AI actor contact information.",
"section_doc":"### Organizations can document the following\n- Who is responsible for documenting and maintaining the AI system inventory details?\n- What processes exist for data generation, acquisition\/collection, ingestion, staging\/storage, transformations, security, maintenance, and dissemination?\n- Given the purpose of this AI, what is an appropriate interval for checking whether it is still accurate, unbiased, explainable, etc.? What are the checks for this model?\n\n### AI Transparency Resources\n- GAO-21-519SP: AI Accountability Framework for Federal Agencies & Other Entities. [URL](https:\/\/www.gao.gov\/products\/gao-21-519sp)\n- Intel.gov: AI Ethics Framework for Intelligence Community - 2020. [URL](https:\/\/www.intelligence.gov\/artificial-intelligence-ethics-framework-for-the-intelligence-community)",
"section_ref":"\u201cA risk-based integrity level schema\u201d, in IEEE 1012, IEEE Standard for System, Software, and Hardware Verification and Validation. See Annex B. [URL](https:\/\/ieeexplore.ieee.org\/stamp\/stamp.jsp?arnumber=1488512)\n\nOff. Comptroller Currency, Comptroller\u2019s Handbook: Model Risk Management (Aug. 2021). See \u201cModel Inventory,\u201d pg. 26. [URL](https:\/\/www.occ.gov\/publications-and-resources\/publications\/comptrollers-handbook\/files\/model-risk-management\/index-model-risk-management.html) \n\nVertaAI, \u201cModelDB: An open-source system for Machine Learning model versioning, metadata, and experiment management.\u201d Accessed Jan. 5, 2023. [URL](https:\/\/github.com\/VertaAI\/modeldb)",
"AI Actors":[
"Governance and Oversight"
],
"Topic":[
"Risk Management",
"Governance",
"Data",
"Documentation"
]
},
{
"type":"Govern",
"title":"GOVERN 1.7",
"category":"GOVERN-1",
"description":"Processes and procedures are in place for decommissioning and phasing out of AI systems safely and in a manner that does not increase risks or decrease the organization\u2019s trustworthiness.",
"section_about":"Irregular or indiscriminate termination or deletion of models or AI systems may be inappropriate and increase organizational risk. For example, AI systems may be subject to regulatory requirements or implicated in future security or legal investigations. To maintain trust, organizations may consider establishing policies and processes for the systematic and deliberate decommissioning of AI systems. Typically, such policies consider user and community concerns, risks in dependent and linked systems, and security, legal or regulatory concerns. Decommissioned models or systems may be stored in a model inventory along with active models, for an established length of time.",
"section_actions":"- Establish policies for decommissioning AI systems. Such policies typically address:\n\t- User and community concerns, and reputational risks. \n\t- Business continuity and financial risks.\n\t- Up and downstream system dependencies. \n\t- Regulatory requirements (e.g., data retention). \n\t- Potential future legal, regulatory, security or forensic investigations.\n\t- Migration to the replacement system, if appropriate.\n- Establish policies that delineate where and for how long decommissioned systems, models and related artifacts are stored. \n- Establish policies that address ancillary data or artifacts that must be preserved for fulsome understanding or execution of the decommissioned AI system, e.g., predictions, explanations, intermediate input feature representations, usernames and passwords, etc.",
"section_doc":"### Organizations can document the following\n- What processes exist for data generation, acquisition\/collection, ingestion, staging\/storage, transformations, security, maintenance, and dissemination?\n- To what extent do these policies foster public trust and confidence in the use of the AI system?\n- If anyone believes that the AI no longer meets this ethical framework, who will be responsible for receiving the concern and as appropriate investigating and remediating the issue? Do they have authority to modify, limit, or stop the use of the AI?\n- If it relates to people, were there any ethical review applications\/reviews\/approvals? (e.g. Institutional Review Board applications)\n\n### AI Transparency Resources\n- GAO-21-519SP: AI Accountability Framework for Federal Agencies & Other Entities. [URL](https:\/\/www.gao.gov\/products\/gao-21-519sp)\n- Intel.gov: AI Ethics Framework for Intelligence Community - 2020. [URL](https:\/\/www.intelligence.gov\/artificial-intelligence-ethics-framework-for-the-intelligence-community)\n- Datasheets for Datasets. [URL](http:\/\/arxiv.org\/abs\/1803.09010)",
"section_ref":"Michelle De Mooy, Joseph Jerome and Vijay Kasschau, \u201cShould It Stay or Should It Go? The Legal, Policy and Technical Landscape Around Data Deletion,\u201d Center for Democracy and Technology, 2017. [URL](https:\/\/cdt.org\/wp-content\/uploads\/2017\/02\/2017-02-23-Data-Deletion-FNL2.pdf)\n\nBurcu Baykurt, \"Algorithmic accountability in US cities: Transparency, impact, and political economy.\" Big Data & Society 9, no. 2 (2022): 20539517221115426. [URL](https:\/\/journals.sagepub.com\/doi\/full\/10.1177\/20539517221115426)\n\n\u201cInformation System Decommissioning Guide,\u201d Bureau of Land Management, 2011. [URL](https:\/\/www.blm.gov\/sites\/blm.gov\/files\/uploads\/IM2011-174_att1.pdf)",
"AI Actors":[
"AI Deployment",
"Operation and Monitoring"
],
"Topic":[
"Decommission",
"Governance"
]
},
{
"type":"Govern",
"title":"GOVERN 2.1",
"category":"GOVERN-2",
"description":"Roles and responsibilities and lines of communication related to mapping, measuring, and managing AI risks are documented and are clear to individuals and teams throughout the organization.",
"section_about":"The development of a risk-aware organizational culture starts with defining responsibilities. For example, under some risk management structures, professionals carrying out test and evaluation tasks are independent from AI system developers and report through risk management functions or directly to executives. This kind of structure may help counter implicit biases such as groupthink or sunk cost fallacy and bolster risk management functions, so efforts are not easily bypassed or ignored.\n\nInstilling a culture where AI system design and implementation decisions can be questioned and course- corrected by empowered AI actors can enhance organizations\u2019 abilities to anticipate and effectively manage risks before they become ingrained.",
"section_actions":"- Establish policies that define the AI risk management roles and responsibilities for positions directly and indirectly related to AI systems, including, but not limited to\n - Boards of directors or advisory committees\n - Senior management\n - AI audit functions\n - Product management\n - Project management\n - AI design\n - AI development\n - Human-AI interaction\n - AI testing and evaluation\n - AI acquisition and procurement\n - Impact assessment functions\n - Oversight functions\n- Establish policies that promote regular communication among AI actors participating in AI risk management efforts.\n- Establish policies that separate management of AI system development functions from AI system testing functions, to enable independent course-correction of AI systems.\n- Establish policies to identify, increase the transparency of, and prevent conflicts of interest in AI risk management efforts.\n- Establish policies to counteract confirmation bias and market incentives that may hinder AI risk management efforts.\n- Establish policies that incentivize AI actors to collaborate with existing legal, oversight, compliance, or enterprise risk functions in their AI risk management activities.",
"section_doc":"### Organizations can document the following\n- To what extent has the entity clarified the roles, responsibilities, and delegated authorities to relevant stakeholders?\n- Who is ultimately responsible for the decisions of the AI and is this person aware of the intended uses and limitations of the analytic?\n- Are the responsibilities of the personnel involved in the various AI governance processes clearly defined?\n- What are the roles, responsibilities, and delegation of authorities of personnel involved in the design, development, deployment, assessment and monitoring of the AI system?\n- Did your organization implement accountability-based practices in data management and protection (e.g. the PDPA and OECD Privacy Principles)?\n\n### AI Transparency Resources\n- WEF Model AI Governance Framework Assessment 2020. [URL](https:\/\/www.pdpc.gov.sg\/-\/media\/Files\/PDPC\/PDF-Files\/Resource-for-Organisation\/AI\/SGModelAIGovFramework2.pdf)\n- WEF Companion to the Model AI Governance Framework- 2020. [URL](https:\/\/www.pdpc.gov.sg\/-\/media\/Files\/PDPC\/PDF-Files\/Resource-for-Organisation\/AI\/SGIsago.pdf)\n- GAO-21-519SP: AI Accountability Framework for Federal Agencies & Other Entities. [URL](https:\/\/www.gao.gov\/products\/gao-21-519sp)",
"section_ref":"Andrew Smith, \u201cUsing Artificial Intelligence and Algorithms,\u201d FTC Business Blog (Apr. 8, 2020). [URL](https:\/\/www.ftc.gov\/news-events\/blogs\/business-blog\/2020\/04\/using-artificial-intelligence-algorithms)\n\nOff. Superintendent Fin. Inst. Canada, Enterprise-Wide Model Risk Management for Deposit-Taking Institutions, E-23 (Sept. 2017).\n\nBd. Governors Fed. Rsrv. Sys., Supervisory Guidance on Model Risk Management, SR Letter 11-7 (Apr. 4, 2011).\n\nOff. Comptroller Currency, Comptroller\u2019s Handbook: Model Risk Management (Aug. 2021). [URL](https:\/\/www.occ.gov\/publications-and-resources\/publications\/comptrollers-handbook\/files\/model-risk-management\/index-model-risk-management.html)\n\nISO, \u201cInformation Technology \u2014 Artificial Intelligence \u2014 Guidelines for AI applications,\u201d ISO\/IEC CD 5339. See Section 6, \u201cStakeholders\u2019 perspectives and AI application framework.\u201d [URL](https:\/\/www.iso.org\/standard\/81120.html)",
"AI Actors":[
"Governance and Oversight"
],
"Topic":[
"Governance",
"Risk Culture"
]
},
{
"type":"Govern",
"title":"GOVERN 2.2",
"category":"GOVERN-2",
"description":"The organization\u2019s personnel and partners receive AI risk management training to enable them to perform their duties and responsibilities consistent with related policies, procedures, and agreements.",
"section_about":"To enhance AI risk management adoption and effectiveness, organizations are encouraged to identify and integrate appropriate training curricula into enterprise learning requirements. Through regular training, AI actors can maintain awareness of:\n\n- AI risk management goals and their role in achieving them.\n- Organizational policies, applicable laws and regulations, and industry best practices and norms.\n\nSee [MAP 3.4]() and [3.5]() for additional relevant information.",
"section_actions":"- Establish policies for personnel addressing ongoing education about:\n\t- Applicable laws and regulations for AI systems.\n\t- Potential negative impacts that may arise from AI systems.\n\t- Organizational AI policies.\n\t- Trustworthy AI characteristics.\n- Ensure that trainings are suitable across AI actor sub-groups - for AI actors carrying out technical tasks (e.g., developers, operators, etc.) as compared to AI actors in oversight roles (e.g., legal, compliance, audit, etc.). \n- Ensure that trainings comprehensively address technical and socio-technical aspects of AI risk management. \n- Verify that organizational AI policies include mechanisms for internal AI personnel to acknowledge and commit to their roles and responsibilities.\n- Verify that organizational policies address change management and include mechanisms to communicate and acknowledge substantial AI system changes.\n- Define paths along internal and external chains of accountability to escalate risk concerns.",
"section_doc":"### Organizations can document the following\n- Are the relevant staff dealing with AI systems properly trained to interpret AI model output and decisions as well as to detect and manage bias in data?\n- How does the entity determine the necessary skills and experience needed to design, develop, deploy, assess, and monitor the AI system?\n- How does the entity assess whether personnel have the necessary skills, training, resources, and domain knowledge to fulfill their assigned responsibilities?\n- What efforts has the entity undertaken to recruit, develop, and retain a workforce with backgrounds, experience, and perspectives that reflect the community impacted by the AI system?\n\n### AI Transparency Resources\n- WEF Model AI Governance Framework Assessment 2020. [URL](https:\/\/www.pdpc.gov.sg\/-\/media\/Files\/PDPC\/PDF-Files\/Resource-for-Organisation\/AI\/SGModelAIGovFramework2.pdf)\n- WEF Companion to the Model AI Governance Framework- 2020. [URL](https:\/\/www.pdpc.gov.sg\/-\/media\/Files\/PDPC\/PDF-Files\/Resource-for-Organisation\/AI\/SGIsago.pdf)\n- GAO-21-519SP: AI Accountability Framework for Federal Agencies & Other Entities. [URL](https:\/\/www.gao.gov\/products\/gao-21-519sp)",
"section_ref":"Off. Comptroller Currency, Comptroller\u2019s Handbook: Model Risk Management (Aug. 2021). [URL](https:\/\/www.occ.gov\/publications-and-resources\/publications\/comptrollers-handbook\/files\/model-risk-management\/index-model-risk-management.html)\n\n\u201cDeveloping Staff Trainings for Equitable AI,\u201d Partnership on Employment & Accessible Technology (PEAT, peatworks.org). [URL](https:\/\/www.peatworks.org\/ai-disability-inclusion-toolkit\/ai-disability-inclusion-resources\/developing-staff-trainings-for-equitable-ai\/)",
"AI Actors":[
"Governance and Oversight"
],
"Topic":[
"Governance",
"Training"
]
},
{
"type":"Govern",
"title":"GOVERN 2.3",
"category":"GOVERN-2",
"description":"Executive leadership of the organization takes responsibility for decisions about risks associated with AI system development and deployment.",
"section_about":"Senior leadership and members of the C-Suite in organizations that maintain an AI portfolio, should maintain awareness of AI risks, affirm the organizational appetite for such risks, and be responsible for managing those risks..\n\nAccountability ensures that a specific team and individual is responsible for AI risk management efforts. Some organizations grant authority and resources (human and budgetary) to a designated officer who ensures adequate performance of the institution\u2019s AI portfolio (e.g. predictive modeling, machine learning).",
"section_actions":"- Organizational management can:\n - Declare risk tolerances for developing or using AI systems.\n - Support AI risk management efforts, and play an active role in such efforts.\n - Integrate a risk and harm prevention mindset throughout the AI lifecycle as part of organizational culture\n - Support competent risk management executives.\n - Delegate the power, resources, and authorization to perform risk management to each appropriate level throughout the management chain.\n- Organizations can establish board committees for AI risk management and oversight functions and integrate those functions within the organization\u2019s broader enterprise risk management approaches.",
"section_doc":"### Organizations can document the following\n- Did your organization\u2019s board and\/or senior management sponsor, support and participate in your organization\u2019s AI governance?\n- What are the roles, responsibilities, and delegation of authorities of personnel involved in the design, development, deployment, assessment and monitoring of the AI system?\n- Do AI solutions provide sufficient information to assist the personnel to make an informed decision and take actions accordingly?\n- To what extent has the entity clarified the roles, responsibilities, and delegated authorities to relevant stakeholders?\n\n### AI Transparency Resources\n- WEF Companion to the Model AI Governance Framework- 2020. [URL](https:\/\/www.pdpc.gov.sg\/-\/media\/Files\/PDPC\/PDF-Files\/Resource-for-Organisation\/AI\/SGIsago.pdf)\n- GAO-21-519SP: AI Accountability Framework for Federal Agencies & Other Entities. [URL](https:\/\/www.gao.gov\/products\/gao-21-519sp)",
"section_ref":"Bd. Governors Fed. Rsrv. Sys., Supervisory Guidance on Model Risk Management, SR Letter 11-7 (Apr. 4, 2011)\n\nOff. Superintendent Fin. Inst. Canada, Enterprise-Wide Model Risk Management for Deposit-Taking Institutions, E-23 (Sept. 2017).",
"AI Actors":[
"Governance and Oversight"
],
"Topic":[
"Governance",
"Risk Tolerance"
]
},
{
"type":"Govern",
"title":"GOVERN 3.1",
"category":"GOVERN-3",
"description":"Decision-makings related to mapping, measuring, and managing AI risks throughout the lifecycle is informed by a diverse team (e.g., diversity of demographics, disciplines, experience, expertise, and backgrounds).",
"section_about":"A diverse team that includes AI actors with diversity of experience, disciplines, and backgrounds to enhance organizational capacity and capability for anticipating risks is better equipped to carry out risk management. Consultation with external personnel may be necessary when internal teams lack a diverse range of lived experiences or disciplinary expertise.\n\nTo extend the benefits of diversity, equity, and inclusion to both the users and AI actors, it is recommended that teams are composed of a diverse group of individuals who reflect a range of backgrounds, perspectives and expertise.\n\nWithout commitment from senior leadership, beneficial aspects of team diversity and inclusion can be overridden by unstated organizational incentives that inadvertently conflict with the broader values of a diverse workforce.",
"section_actions":"Organizational management can:\n\n- Define policies and hiring practices at the outset that promote interdisciplinary roles, competencies, skills, and capacity for AI efforts.\n- Define policies and hiring practices that lead to demographic and domain expertise diversity; empower staff with necessary resources and support, and facilitate the contribution of staff feedback and concerns without fear of reprisal.\n- Establish policies that facilitate inclusivity and the integration of new insights into existing practice.\n- Seek external expertise to supplement organizational diversity, equity, inclusion, and accessibility where internal expertise is lacking.\n- Establish policies that incentivize AI actors to collaborate with existing nondiscrimination, accessibility and accommodation, and human resource functions, employee resource group (ERGs), and diversity, equity, inclusion, and accessibility (DEIA) initiatives.",
"section_doc":"### Organizations can document the following\n- Are the relevant staff dealing with AI systems properly trained to interpret AI model output and decisions as well as to detect and manage bias in data?\n- Entities include diverse perspectives from technical and non-technical communities throughout the AI life cycle to anticipate and mitigate unintended consequences including potential bias and discrimination.\n- Stakeholder involvement: Include diverse perspectives from a community of stakeholders throughout the AI life cycle to mitigate risks.\n- Strategies to incorporate diverse perspectives include establishing collaborative processes and multidisciplinary teams that involve subject matter experts in data science, software development, civil liberties, privacy and security, legal counsel, and risk management.\n- To what extent are the established procedures effective in mitigating bias, inequity, and other concerns resulting from the system?\n\n### AI Transparency Resources\n- WEF Model AI Governance Framework Assessment 2020. [URL](https:\/\/www.pdpc.gov.sg\/-\/media\/Files\/PDPC\/PDF-Files\/Resource-for-Organisation\/AI\/SGModelAIGovFramework2.pdf)\n- Datasheets for Datasets. [URL](http:\/\/arxiv.org\/abs\/1803.09010)",
"section_ref":"Dylan Walsh, \u201cHow can human-centered AI fight bias in machines and people?\u201d MIT Sloan Mgmt. Rev., 2021. [URL](https:\/\/mitsloan.mit.edu\/ideas-made-to-matter\/how-can-human-centered-ai-fight-bias-machines-and-people)\n\nMichael Li, \u201cTo Build Less-Biased AI, Hire a More Diverse Team,\u201d Harvard Bus. Rev., 2020. [URL](https:\/\/hbr.org\/2020\/10\/to-build-less-biased-ai-hire-a-more-diverse-team)\n\nBo Cowgill et al., \u201cBiased Programmers? Or Biased Data? A Field Experiment in Operationalizing AI Ethics,\u201d 2020. [URL](https:\/\/arxiv.org\/pdf\/2012.02394.pdf)\n\nNaomi Ellemers, Floortje Rink, \u201cDiversity in work groups,\u201d Current opinion in psychology, vol. 11, pp. 49\u201353, 2016.\n\nKatrin Talke, S\u00f8ren Salomo, Alexander Kock, \u201cTop management team diversity and strategic innovation orientation: The relationship and consequences for innovativeness and performance,\u201d Journal of Product Innovation Management, vol. 28, pp. 819\u2013832, 2011.\n\nSarah Myers West, Meredith Whittaker, and Kate Crawford,, \u201cDiscriminating Systems: Gender, Race, and Power in AI,\u201d AI Now Institute, Tech. Rep., 2019. [URL](https:\/\/ainowinstitute.org\/discriminatingsystems.pdf)\n\nSina Fazelpour, Maria De-Arteaga, Diversity in sociotechnical machine learning systems. Big Data & Society. January 2022. doi:10.1177\/20539517221082027\n\nMary L. Cummings and Songpo Li, 2021a. Sources of subjectivity in machine learning models. ACM Journal of Data and Information Quality, 13(2), 1\u20139\n\n\u201cStaffing for Equitable AI: Roles & Responsibilities,\u201d Partnership on Employment & Accessible Technology (PEAT, peatworks.org). Accessed Jan. 6, 2023. [URL](https:\/\/www.peatworks.org\/ai-disability-inclusion-toolkit\/ai-disability-inclusion-resources\/staffing-for-equitable-ai-roles-responsibilities\/)",
"AI Actors":[
"Governance and Oversight",
"AI Design"
],
"Topic":[
"Diversity",
"Interdisciplinarity",
"Governance"
]
},
{
"type":"Govern",
"title":"GOVERN 3.2",
"category":"GOVERN-3",
"description":"Policies and procedures are in place to define and differentiate roles and responsibilities for human-AI configurations and oversight of AI systems.",
"section_about":"Identifying and managing AI risks and impacts are enhanced when a broad set of perspectives and actors across the AI lifecycle, including technical, legal, compliance, social science, and human factors expertise is engaged. AI actors include those who operate, use, or interact with AI systems for downstream tasks, or monitor AI system performance. Effective risk management efforts include:\n\n- clear definitions and differentiation of the various human roles and responsibilities for AI system oversight and governance\n- recognizing and clarifying differences between AI system overseers and those using or interacting with AI systems.",
"section_actions":"- Establish policies and procedures that define and differentiate the various human roles and responsibilities when using, interacting with, or monitoring AI systems.\n- Establish procedures for capturing and tracking risk information related to human-AI configurations and associated outcomes.\n- Establish policies for the development of proficiency standards for AI actors carrying out system operation tasks and system oversight tasks.\n- Establish specified risk management training protocols for AI actors carrying out system operation tasks and system oversight tasks.\n- Establish policies and procedures regarding AI actor roles, and responsibilities for human oversight of deployed systems.\n- Establish policies and procedures defining human-AI configurations (configurations where AI systems are explicitly designated and treated as team members in primarily human teams) in relation to organizational risk tolerances, and associated documentation. \n- Establish policies to enhance the explanation, interpretation, and overall transparency of AI systems.\n- Establish policies for managing risks regarding known difficulties in human-AI configurations, human-AI teaming, and AI system user experience and user interactions (UI\/UX).",
"section_doc":"### Organizations can document the following\n- What type of information is accessible on the design, operations, and limitations of the AI system to external stakeholders, including end users, consumers, regulators, and individuals impacted by use of the AI system?\n- To what extent has the entity documented the appropriate level of human involvement in AI-augmented decision-making?\n- How will the accountable human(s) address changes in accuracy and precision due to either an adversary\u2019s attempts to disrupt the AI or unrelated changes in operational\/business environment, which may impact the accuracy of the AI?\n- To what extent has the entity clarified the roles, responsibilities, and delegated authorities to relevant stakeholders?\n- How does the entity assess whether personnel have the necessary skills, training, resources, and domain knowledge to fulfill their assigned responsibilities?\n\n### AI Transparency Resources\n- GAO-21-519SP: AI Accountability Framework for Federal Agencies & Other Entities. [URL](https:\/\/www.gao.gov\/products\/gao-21-519sp)\n- Intel.gov: AI Ethics Framework for Intelligence Community - 2020. [URL](https:\/\/www.intelligence.gov\/artificial-intelligence-ethics-framework-for-the-intelligence-community)\n- WEF Companion to the Model AI Governance Framework- 2020. [URL](https:\/\/www.pdpc.gov.sg\/-\/media\/Files\/PDPC\/PDF-Files\/Resource-for-Organisation\/AI\/SGIsago.pdf)",
"section_ref":"Madeleine Clare Elish, \"Moral Crumple Zones: Cautionary tales in human-robot interaction,\" Engaging Science, Technology, and Society, Vol. 5, 2019. [URL](https:\/\/estsjournal.org\/index.php\/ests\/article\/view\/260)\n\n\u201cHuman-AI Teaming: State-Of-The-Art and Research Needs,\u201d National Academies of Sciences, Engineering, and Medicine, 2022. [URL](https:\/\/doi.org\/10.17226\/26355)\n\nBen Green, \"The Flaws Of Policies Requiring Human Oversight Of Government Algorithms,\" Computer Law & Security Review 45 (2022). [URL](https:\/\/papers.ssrn.com\/sol3\/papers.cfm?abstract_id=3921216)\n\nDavid A. Broniatowski. 2021. Psychological Foundations of Explainability and Interpretability in Artificial Intelligence. National Institute of Standards and Technology (NIST) IR 8367. National Institute of Standards and Technology, Gaithersburg, MD. [URL](https:\/\/doi.org\/10.6028\/NIST.IR.8367)\n\nOff. Comptroller Currency, Comptroller\u2019s Handbook: Model Risk Management (Aug. 2021). [URL](https:\/\/www.occ.gov\/publications-and-resources\/publications\/comptrollers-handbook\/files\/model-risk-management\/index-model-risk-management.html)",
"AI Actors":[
"AI Design"
],
"Topic":[
"Human-AI teaming",
"Human oversight",
"Governance"
]
},
{
"type":"Govern",
"title":"GOVERN 4.1",
"category":"GOVERN-4",
"description":"Organizational policies, and practices are in place to foster a critical thinking and safety-first mindset in the design, development, deployment, and uses of AI systems to minimize negative impacts.",
"section_about":"A risk culture and accompanying practices can help organizations effectively triage the most critical risks. Organizations in some industries implement three (or more) \u201clines of defense,\u201d where separate teams are held accountable for different aspects of the system lifecycle, such as development, risk management, and auditing. While a traditional three-lines approach may be impractical for smaller organizations, leadership can commit to cultivating a strong risk culture through other means. For example, \u201ceffective challenge,\u201d is a culture- based practice that encourages critical thinking and questioning of important design and implementation decisions by experts with the authority and stature to make such changes.\n\nRed-teaming is another risk measurement and management approach. This practice consists of adversarial testing of AI systems under stress conditions to seek out failure modes or vulnerabilities in the system. Red-teams are composed of external experts or personnel who are independent from internal AI actors.",
"section_actions":"- Establish policies that require inclusion of oversight functions (legal, compliance, risk management) from the outset of the system design process.\n- Establish policies that promote effective challenge of AI system design, implementation, and deployment decisions, via mechanisms such as the three lines of defense, model audits, or red-teaming \u2013 to minimize workplace risks such as groupthink.\n- Establish policies that incentivize safety-first mindset and general critical thinking and review at an organizational and procedural level.\n- Establish whistleblower protections for insiders who report on perceived serious problems with AI systems.\n- Establish policies to integrate a harm and risk prevention mindset throughout the AI lifecycle.",
"section_doc":"### Organizations can document the following\n- To what extent has the entity documented the AI system\u2019s development, testing methodology, metrics, and performance outcomes?\n- Are organizational information sharing practices widely followed and transparent, such that related past failed designs can be avoided? \n- Are training manuals and other resources for carrying out incident response documented and available? \n- Are processes for operator reporting of incidents and near-misses documented and available?\n\n\n### AI Transparency Resources\n- Datasheets for Datasets. [URL](http:\/\/arxiv.org\/abs\/1803.09010)\n- GAO-21-519SP: AI Accountability Framework for Federal Agencies & Other Entities. [URL](https:\/\/www.gao.gov\/products\/gao-21-519sp)\n- WEF Model AI Governance Framework Assessment 2020. [URL](https:\/\/www.pdpc.gov.sg\/-\/media\/Files\/PDPC\/PDF-Files\/Resource-for-Organisation\/AI\/SGModelAIGovFramework2.pdf)",
"section_ref":"Bd. Governors Fed. Rsrv. Sys., Supervisory Guidance on Model Risk Management, SR Letter 11-7 (Apr. 4, 2011)\n\nPatrick Hall, Navdeep Gill, and Benjamin Cox, \u201cResponsible Machine Learning,\u201d O\u2019Reilly Media, 2020. [URL](https:\/\/www.oreilly.com\/library\/view\/responsible-machine-learning\/9781492090878\/)\n\nOff. Superintendent Fin. Inst. Canada, Enterprise-Wide Model Risk Management for Deposit-Taking Institutions, E-23 (Sept. 2017).\n\nGAO, \u201cArtificial Intelligence: An Accountability Framework for Federal Agencies and Other Entities,\u201d GAO@100 (GAO-21-519SP), June 2021. [URL](https:\/\/www.gao.gov\/assets\/gao-21-519sp.pdf)\n\nDonald Sull, Stefano Turconi, and Charles Sull, \u201cWhen It Comes to Culture, Does Your Company Walk the Talk?\u201d MIT Sloan Mgmt. Rev., 2020. [URL](https:\/\/sloanreview.mit.edu\/article\/when-it-comes-to-culture-does-your-company-walk-the-talk)\n\nKathy Baxter, AI Ethics Maturity Model, Salesforce. [URL](https:\/\/www.salesforceairesearch.com\/static\/ethics\/EthicalAIMaturityModel.pdf)",
"AI Actors":[
"AI Design",
"AI Development",
"AI Deployment",
"Operation and Monitoring"
],
"Topic":[
"Risk Culture",
"Governance",
"Adversarial"
]
},
{
"type":"Govern",
"title":"GOVERN 4.2",
"category":"GOVERN-4",
"description":"Organizational teams document the risks and potential impacts of the AI technology they design, develop, deploy, evaluate and use, and communicate about the impacts more broadly.",
"section_about":"Impact assessments are one approach for driving responsible technology development practices. And, within a specific use case, these assessments can provide a high-level structure for organizations to frame risks of a given algorithm or deployment. Impact assessments can also serve as a mechanism for organizations to articulate risks and generate documentation for managing and oversight activities when harms do arise.\n\nImpact assessments may:\n\n- be applied at the beginning of a process but also iteratively and regularly since goals and outcomes can evolve over time. \n- include perspectives from AI actors, including operators, users, and potentially impacted communities (including historically marginalized communities, those with disabilities, and individuals impacted by the digital divide), \n- assist in \u201cgo\/no-go\u201d decisions for an AI system. \n- consider conflicts of interest, or undue influence, related to the organizational team being assessed.\n\nSee the MAP function playbook guidance for more information relating to impact assessments.",
"section_actions":"- Establish impact assessment policies and processes for AI systems used by the organization.\n- Align organizational impact assessment activities with relevant regulatory or legal requirements. \n- Verify that impact assessment activities are appropriate to evaluate the potential negative impact of a system and how quickly a system changes, and that assessments are applied on a regular basis.\n- Utilize impact assessments to inform broader evaluations of AI system risk.",
"section_doc":"### Organizations can document the following\n- How has the entity identified and mitigated potential impacts of bias in the data, including inequitable or discriminatory outcomes?\n- How has the entity documented the AI system\u2019s data provenance, including sources, origins, transformations, augmentations, labels, dependencies, constraints, and metadata?\n- To what extent has the entity clearly defined technical specifications and requirements for the AI system?\n- To what extent has the entity documented and communicated the AI system\u2019s development, testing methodology, metrics, and performance outcomes?\n- Have you documented and explained that machine errors may differ from human errors?\n\n### AI Transparency Resources\n- GAO-21-519SP: AI Accountability Framework for Federal Agencies & Other Entities. [URL](https:\/\/www.gao.gov\/products\/gao-21-519sp)\n- Datasheets for Datasets. [URL](http:\/\/arxiv.org\/abs\/1803.09010)",
"section_ref":"Dillon Reisman, Jason Schultz, Kate Crawford, Meredith Whittaker, \u201cAlgorithmic Impact Assessments: A Practical Framework For Public Agency Accountability,\u201d AI Now Institute, 2018. [URL](https:\/\/ainowinstitute.org\/aiareport2018.pdf)\n\nH.R. 2231, 116th Cong. (2019). [URL](https:\/\/www.congress.gov\/bill\/116th-congress\/house-bill\/2231\/text)\n\nBSA The Software Alliance (2021) Confronting Bias: BSA\u2019s Framework to Build Trust in AI. [URL](https:\/\/www.bsa.org\/reports\/confronting-bias-bsas-framework-to-build-trust-in-ai)\n\nAnthony M. Barrett, Dan Hendrycks, Jessica Newman and Brandie Nonnecke. Actionable Guidance for High-Consequence AI Risk Management: Towards Standards Addressing AI Catastrophic Risks. ArXiv abs\/2206.08966 (2022) https:\/\/arxiv.org\/abs\/2206.08966\n\nDavid Wright, \u201cMaking Privacy Impact Assessments More Effective.\" The Information Society 29, 2013. [URL](https:\/\/iapp.org\/media\/pdf\/knowledge_center\/Making_PIA__more_effective.pdf)\n\nKonstantinia Charitoudi and Andrew Blyth. A Socio-Technical Approach to Cyber Risk Management and Impact Assessment. Journal of Information Security 4, 1 (2013), 33-41. [URL](https:\/\/www.scirp.org\/pdf\/JIS_2013013014352043.pdf)\n\nEmanuel Moss, Elizabeth Anne Watkins, Ranjit Singh, Madeleine Clare Elish, & Jacob Metcalf. 2021. \u201cAssembling Accountability: Algorithmic Impact Assessment for the Public Interest\u201d. [URL](https:\/\/datasociety.net\/library\/assembling-accountability-algorithmic-impact-assessment-for-the-public-interest\/)\n\nMicrosoft. Responsible AI Impact Assessment Template. 2022. [URL](https:\/\/blogs.microsoft.com\/wp-content\/uploads\/prod\/sites\/5\/2022\/06\/Microsoft-RAI-Impact-Assessment-Template.pdf)\n\nMicrosoft. Responsible AI Impact Assessment Guide. 2022. [URL](https:\/\/blogs.microsoft.com\/wp-content\/uploads\/prod\/sites\/5\/2022\/06\/Microsoft-RAI-Impact-Assessment-Guide.pdf)\n\nMicrosoft. Foundations of assessing harm. 2022. [URL](https:\/\/opdhsblobprod04.blob.core.windows.net\/contents\/f4438a49b5d04a4b93b0fa1f989369cf\/8db74d210fb2fc34b7d6981ed0545adc?skoid=2d004ef0-5468-4cd8-a5b7-14c04c6415bc&sktid=975f013f-7f24-47e8-a7d3-abc4752bf346&skt=2023-01-15T14%3A46%3A07Z&ske=2023-01-22T14%3A51%3A07Z&sks=b&skv=2021-10-04&sv=2021-10-04&se=2023-01-21T05%3A44%3A16Z&sr=b&sp=r&sig=zr00zgBC8dJFXCJB%2BrZkY%2BHse1Y2g886cE9zqO7yvMg%3D)\n\nMauritz Kop, \u201cAI Impact Assessment & Code of Conduct,\u201d Futurium, May 2019. [URL](https:\/\/futurium.ec.europa.eu\/en\/european-ai-alliance\/best-practices\/ai-impact-assessment-code-conduct)\n\nDillon Reisman, Jason Schultz, Kate Crawford, and Meredith Whittaker, \u201cAlgorithmic Impact Assessments: A Practical Framework For Public Agency Accountability,\u201d AI Now, Apr. 2018. [URL](https:\/\/ainowinstitute.org\/aiareport2018.pdf)\n\nAndrew D. Selbst, \u201cAn Institutional View Of Algorithmic Impact Assessments,\u201d Harvard Journal of Law & Technology, vol. 35, no. 1, 2021\n\nAda Lovelace Institute. 2022. Algorithmic Impact Assessment: A Case Study in Healthcare. Accessed July 14, 2022. [URL](https:\/\/www.adalovelaceinstitute.org\/report\/algorithmic-impact-assessment-case-study-healthcare\/)\n\nKathy Baxter, AI Ethics Maturity Model, Salesforce [URL](https:\/\/www.salesforceairesearch.com\/static\/ethics\/EthicalAIMaturityModel.pdf)",
"AI Actors":[
"AI Design",
"AI Development",
"AI Deployment",
"Operation and Monitoring"
],
"Topic":[
"Risk Culture",
"Governance",
"Impact Assessment"
]
},
{
"type":"Govern",
"title":"GOVERN 4.3",
"category":"GOVERN-4",
"description":"Organizational practices are in place to enable AI testing, identification of incidents, and information sharing.",
"section_about":"Identifying AI system limitations, detecting and tracking negative impacts and incidents, and sharing information about these issues with appropriate AI actors will improve risk management. Issues such as concept drift, AI bias and discrimination, shortcut learning or underspecification are difficult to identify using current standard AI testing processes. Organizations can institute in-house use and testing policies and procedures to identify and manage such issues. Efforts can take the form of pre-alpha or pre-beta testing, or deploying internally developed systems or products within the organization. Testing may entail limited and controlled in-house, or publicly available, AI system testbeds, and accessibility of AI system interfaces and outputs.\n\nWithout policies and procedures that enable consistent testing practices, risk management efforts may be bypassed or ignored, exacerbating risks or leading to inconsistent risk management activities.\n\nInformation sharing about impacts or incidents detected during testing or deployment can:\n\n* draw attention to AI system risks, failures, abuses or misuses, \n* allow organizations to benefit from insights based on a wide range of AI applications and implementations, and \n* allow organizations to be more proactive in avoiding known failure modes.\n\nOrganizations may consider sharing incident information with the AI Incident Database, the AIAAIC, users, impacted communities, or with traditional cyber vulnerability databases, such as the MITRE CVE list.",
"section_actions":"- Establish policies and procedures to facilitate and equip AI system testing.\n- Establish organizational commitment to identifying AI system limitations and sharing of insights about limitations within appropriate AI actor groups.\n- Establish policies for reporting and documenting incident response.\n- Establish policies and processes regarding public disclosure of incidents and information sharing.\n- Establish guidelines for incident handling related to AI system risks and performance.",
"section_doc":"### Organizations can document the following\n- Did your organization address usability problems and test whether user interfaces served their intended purposes? Consulting the community or end users at the earliest stages of development to ensure there is transparency on the technology used and how it is deployed.\n- Did your organization implement a risk management system to address risks involved in deploying the identified AI solution (e.g. personnel risk or changes to commercial objectives)?\n- To what extent can users or parties affected by the outputs of the AI system test the AI system and provide feedback?\n\n### AI Transparency Resources\n- WEF Model AI Governance Framework Assessment 2020. [URL](https:\/\/www.pdpc.gov.sg\/-\/media\/Files\/PDPC\/PDF-Files\/Resource-for-Organisation\/AI\/SGModelAIGovFramework2.pdf)\n- WEF Companion to the Model AI Governance Framework- 2020. [URL](https:\/\/www.pdpc.gov.sg\/-\/media\/Files\/PDPC\/PDF-Files\/Resource-for-Organisation\/AI\/SGIsago.pdf)",
"section_ref":"Sean McGregor, \u201cPreventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database,\u201d arXiv:2011.08512 [cs], Nov. 2020, arXiv:2011.08512. [URL](http:\/\/arxiv.org\/abs\/2011.08512)\n\nChristopher Johnson, Mark Badger, David Waltermire, Julie Snyder, and Clem Skorupka, \u201cGuide to cyber threat information sharing,\u201d National Institute of Standards and Technology, NIST Special Publication 800-150, Nov 2016. [URL](https:\/\/doi.org\/10.6028\/NIST.SP.800-150)\n\nMengyi Wei, Zhixuan Zhou (2022). AI Ethics Issues in Real World: Evidence from AI Incident Database. ArXiv, abs\/2206.07635. [URL](https:\/\/arxiv.org\/pdf\/2206.07635.pdf)\n\nBSA The Software Alliance (2021) Confronting Bias: BSA\u2019s Framework to Build Trust in AI. [URL](https:\/\/www.bsa.org\/reports\/confronting-bias-bsas-framework-to-build-trust-in-ai)\n\n\u201cUsing Combined Expertise to Evaluate Web Accessibility,\u201d W3C Web Accessibility Initiative. [URL](https:\/\/www.w3.org\/WAI\/test-evaluate\/combined-expertise\/)",
"AI Actors":[
"TEVV",
"Operation and Monitoring",
"Governance and Oversight",
"Fairness and Bias"
],
"Topic":[
"Risk Culture",
"Governance",
"AI Incidents",
"Impact Assessment",
"Drift",
"Fairness and Bias"
]
},
{
"type":"Govern",
"title":"GOVERN 5.1",
"category":"GOVERN-5",
"description":"Organizational policies and practices are in place to collect, consider, prioritize, and integrate feedback from those external to the team that developed or deployed the AI system regarding the potential individual and societal impacts related to AI risks.",
"section_about":"Beyond internal and laboratory-based system testing, organizational policies and practices may consider AI system fitness-for-purpose related to the intended context of use.\n\nParticipatory stakeholder engagement is one type of qualitative activity to help AI actors answer questions such as whether to pursue a project or how to design with impact in mind. This type of feedback, with domain expert input, can also assist AI actors to identify emergent scenarios and risks in certain AI applications. The consideration of when and how to convene a group and the kinds of individuals, groups, or community organizations to include is an iterative process connected to the system's purpose and its level of risk. Other factors relate to how to collaboratively and respectfully capture stakeholder feedback and insight that is useful, without being a solely perfunctory exercise.\n\nThese activities are best carried out by personnel with expertise in participatory practices, qualitative methods, and translation of contextual feedback for technical audiences.\n\nParticipatory engagement is not a one-time exercise and is best carried out from the very beginning of AI system commissioning through the end of the lifecycle. Organizations can consider how to incorporate engagement when beginning a project and as part of their monitoring of systems. Engagement is often utilized as a consultative practice, but this perspective may inadvertently lead to \u201cparticipation washing.\u201d Organizational transparency about the purpose and goal of the engagement can help mitigate that possibility.\n\nOrganizations may also consider targeted consultation with subject matter experts as a complement to participatory findings. Experts may assist internal staff in identifying and conceptualizing potential negative impacts that were previously not considered.",
"section_actions":"- Establish AI risk management policies that explicitly address mechanisms for collecting, evaluating, and incorporating stakeholder and user feedback that could include:\n - Recourse mechanisms for faulty AI system outputs.\n - Bug bounties.\n - Human-centered design.\n - User-interaction and experience research.\n - Participatory stakeholder engagement with individuals and communities that may experience negative impacts.\n- Verify that stakeholder feedback is considered and addressed, including environmental concerns, and across the entire population of intended users, including historically excluded populations, people with disabilities, older people, and those with limited access to the internet and other basic technologies.\n- Clarify the organization\u2019s principles as they apply to AI systems \u2013 considering those which have been proposed publicly \u2013 to inform external stakeholders of the organization\u2019s values. Consider publishing or adopting AI principles.",
"section_doc":"### Organizations can document the following \n- What type of information is accessible on the design, operations, and limitations of the AI system to external stakeholders, including end users, consumers, regulators, and individuals impacted by use of the AI system?\n- To what extent has the entity clarified the roles, responsibilities, and delegated authorities to relevant stakeholders?\n- How easily accessible and current is the information available to external stakeholders?\n- What was done to mitigate or reduce the potential for harm?\n- Stakeholder involvement: Include diverse perspectives from a community of stakeholders throughout the AI life cycle to mitigate risks.\n\n### AI Transparency Resources\n- Datasheets for Datasets. [URL](http:\/\/arxiv.org\/abs\/1803.09010)\n- GAO-21-519SP: AI Accountability Framework for Federal Agencies & Other Entities. [URL](https:\/\/www.gao.gov\/products\/gao-21-519sp)\n- AI policies and initiatives, in Artificial Intelligence in Society, OECD, 2019. [URL](https:\/\/www.oecd.org\/publications\/artificial-intelligence-in-society-eedfee77-en.htm)\n- Stakeholders in Explainable AI, Sep. 2018. [URL](http:\/\/arxiv.org\/abs\/1810.00184)",
"section_ref":"ISO, \u201cErgonomics of human-system interaction \u2014 Part 210: Human-centered design for interactive systems,\u201d ISO 9241-210:2019 (2nd ed.), July 2019. [URL](https:\/\/www.iso.org\/standard\/77520.html)\n\nRumman Chowdhury and Jutta Williams, \"Introducing Twitter\u2019s first algorithmic bias bounty challenge,\" [URL](https:\/\/blog.twitter.com\/engineering\/en_us\/topics\/insights\/2021\/algorithmic-bias-bounty-challenge)\n\nLeonard Haas and Sebastian Gie\u00dfler, \u201cIn the realm of paper tigers \u2013 exploring the failings of AI ethics guidelines,\u201d AlgorithmWatch, 2020. [URL](https:\/\/algorithmwatch.org\/en\/ai-ethics-guidelines-inventory-upgrade-2020\/)\n\nJosh Kenway, Camille Francois, Dr. Sasha Costanza-Chock, Inioluwa Deborah Raji, & Dr. Joy Buolamwini. 2022. Bug Bounties for Algorithmic Harms? Algorithmic Justice League. Accessed July 14, 2022. [URL](https:\/\/www.ajl.org\/bugs)\n\nMicrosoft Community Jury , Azure Application Architecture Guide. [URL](https:\/\/docs.microsoft.com\/en-us\/azure\/architecture\/guide\/responsible-innovation\/community-jury\/)\n\n\u201cDefinition of independent verification and validation (IV&V)\u201d, in IEEE 1012, IEEE Standard for System, Software, and Hardware Verification and Validation. Annex C, [URL](https:\/\/people.eecs.ku.edu\/~hossein\/Teaching\/Stds\/1012.pdf)",
"AI Actors":[
"AI Design",
"Governance and Oversight",
"AI Impact Assessment",
"Affected Individuals and Communities"
],
"Topic":[
"Participation",
"Governance",
"Impact Assessment"
]
},
{
"type":"Govern",
"title":"GOVERN 5.2",
"category":"GOVERN-5",
"description":"Mechanisms are established to enable AI actors to regularly incorporate adjudicated feedback from relevant AI actors into system design and implementation.",
"section_about":"Organizational policies and procedures that equip AI actors with the processes, knowledge, and expertise needed to inform collaborative decisions about system deployment improve risk management. These decisions are closely tied to AI systems and organizational risk tolerance.\n\nRisk tolerance, established by organizational leadership, reflects the level and type of risk the organization will accept while conducting its mission and carrying out its strategy. When risks arise, resources are allocated based on the assessed risk of a given AI system. Organizations typically apply a risk tolerance approach where higher risk systems receive larger allocations of risk management resources and lower risk systems receive less resources.",
"section_actions":"- Explicitly acknowledge that AI systems, and the use of AI, present inherent costs and risks along with potential benefits.\n- Define reasonable risk tolerances for AI systems informed by laws, regulation, best practices, or industry standards.\n- Establish policies that ensure all relevant AI actors are provided with meaningful opportunities to provide feedback on system design and implementation.\n- Establish policies that define how to assign AI systems to established risk tolerance levels by combining system impact assessments with the likelihood that an impact occurs. Such assessment often entails some combination of:\n - Econometric evaluations of impacts and impact likelihoods to assess AI system risk.\n - Red-amber-green (RAG) scales for impact severity and likelihood to assess AI system risk.\n - Establishment of policies for allocating risk management resources along established risk tolerance levels, with higher-risk systems receiving more risk management resources and oversight.\n - Establishment of policies for approval, conditional approval, and disapproval of the design, implementation, and deployment of AI systems.\n- Establish policies facilitating the early decommissioning of AI systems that surpass an organization\u2019s ability to reasonably mitigate risks.",
"section_doc":"### Organizations can document the following\n- Who is ultimately responsible for the decisions of the AI and is this person aware of the intended uses and limitations of the analytic?\n- Who will be responsible for maintaining, re-verifying, monitoring, and updating this AI once deployed?\n- Who is accountable for the ethical considerations during all stages of the AI lifecycle?\n- To what extent are the established procedures effective in mitigating bias, inequity, and other concerns resulting from the system?\n- Does the AI solution provide sufficient information to assist the personnel to make an informed decision and take actions accordingly?\n\n### AI Transparency Resources\n- WEF Model AI Governance Framework Assessment 2020. [URL](https:\/\/www.pdpc.gov.sg\/-\/media\/Files\/PDPC\/PDF-Files\/Resource-for-Organisation\/AI\/SGModelAIGovFramework2.pdf)\n- WEF Companion to the Model AI Governance Framework- 2020. [URL](https:\/\/www.pdpc.gov.sg\/-\/media\/Files\/PDPC\/PDF-Files\/Resource-for-Organisation\/AI\/SGIsago.pdf)\n- Stakeholders in Explainable AI, Sep. 2018. [URL](http:\/\/arxiv.org\/abs\/1810.00184)\n- AI policies and initiatives, in Artificial Intelligence in Society, OECD, 2019. [URL](https:\/\/www.oecd.org\/publications\/artificial-intelligence-in-society-eedfee77-en.htm)",
"section_ref":"Bd. Governors Fed. Rsrv. Sys., Supervisory Guidance on Model Risk Management, SR Letter 11-7 (Apr. 4, 2011)\n\nOff. Comptroller Currency, Comptroller\u2019s Handbook: Model Risk Management (Aug. 2021). [URL](https:\/\/www.occ.gov\/publications-and-resources\/publications\/comptrollers-handbook\/files\/model-risk-management\/index-model-risk-management.html)\n\nThe Office of the Comptroller of the Currency. Enterprise Risk Appetite Statement. (Nov. 20, 2019). Retrieved on July 12, 2022. [URL](https:\/\/www.occ.treas.gov\/publications-and-resources\/publications\/banker-education\/files\/pub-risk-appetite-statement.pdf)",
"AI Actors":[
"AI Impact Assessment",
"Governance and Oversight",
"Operation and Monitoring"
],
"Topic":[
"Participation",
"Governance",
"Impact Assessment"
]
},
{
"type":"Govern",
"title":"GOVERN 6.1",
"category":"GOVERN-6",
"description":"Policies and procedures are in place that address AI risks associated with third-party entities, including risks of infringement of a third party\u2019s intellectual property or other rights.",
"section_about":"Risk measurement and management can be complicated by how customers use or integrate third-party data or systems into AI products or services, particularly without sufficient internal governance structures and technical safeguards. \n\nOrganizations usually engage multiple third parties for external expertise, data, software packages (both open source and commercial), and software and hardware platforms across the AI lifecycle. This engagement has beneficial uses and can increase complexities of risk management efforts.\n\nOrganizational approaches to managing third-party (positive and negative) risk may be tailored to the resources, risk profile, and use case for each system. Organizations can apply governance approaches to third-party AI systems and data as they would for internal resources \u2014 including open source software, publicly available data, and commercially available models.",
"section_actions":"- Collaboratively establish policies that address third-party AI systems and data.\n- Establish policies related to:\n - Transparency into third-party system functions, including knowledge about training data, training and inference algorithms, and assumptions and limitations.\n - Thorough testing of third-party AI systems. (See MEASURE for more detail)\n - Requirements for clear and complete instructions for third-party system usage.\n- Evaluate policies for third-party technology. \n- Establish policies that address supply chain, full product lifecycle and associated processes, including legal, ethical, and other issues concerning procurement and use of third-party software or hardware systems and data.",
"section_doc":"### Organizations can document the following\n- Did you establish mechanisms that facilitate the AI system\u2019s auditability (e.g. traceability of the development process, the sourcing of training data and the logging of the AI system\u2019s processes, outcomes, positive and negative impact)?\n- If a third party created the AI, how will you ensure a level of explainability or interpretability?\n- Did you ensure that the AI system can be audited by independent third parties?\n- Did you establish a process for third parties (e.g. suppliers, end users, subjects, distributors\/vendors or workers) to report potential vulnerabilities, risks or biases in the AI system?\n- To what extent does the plan specifically address risks associated with acquisition, procurement of packaged software from vendors, cybersecurity controls, computational infrastructure, data, data science, deployment mechanics, and system failure?\n\n### AI Transparency Resources\n- GAO-21-519SP: AI Accountability Framework for Federal Agencies & Other Entities. [URL](https:\/\/www.gao.gov\/products\/gao-21-519sp)\n- Intel.gov: AI Ethics Framework for Intelligence Community - 2020. [URL](https:\/\/www.intelligence.gov\/artificial-intelligence-ethics-framework-for-the-intelligence-community)\n- WEF Model AI Governance Framework Assessment 2020. [URL](https:\/\/www.pdpc.gov.sg\/-\/media\/Files\/PDPC\/PDF-Files\/Resource-for-Organisation\/AI\/SGModelAIGovFramework2.pdf)\n- WEF Companion to the Model AI Governance Framework- 2020. [URL](https:\/\/www.pdpc.gov.sg\/-\/media\/Files\/PDPC\/PDF-Files\/Resource-for-Organisation\/AI\/SGIsago.pdf)\n- AI policies and initiatives, in Artificial Intelligence in Society, OECD, 2019. [URL](https:\/\/www.oecd.org\/publications\/artificial-intelligence-in-society-eedfee77-en.htm)\n- Assessment List for Trustworthy AI (ALTAI) - The High-Level Expert Group on AI - 2019. [URL](https:\/\/digital-strategy.ec.europa.eu\/en\/library\/assessment-list-trustworthy-artificial-intelligence-altai-self-assessment)",
"section_ref":"Bd. Governors Fed. Rsrv. Sys., Supervisory Guidance on Model Risk Management, SR Letter 11-7 (Apr. 4, 2011)\n\n\u201cProposed Interagency Guidance on Third-Party Relationships: Risk Management,\u201d 2021. [URL](https:\/\/www.occ.gov\/news-issuances\/news-releases\/2021\/nr-occ-2021-74a.pdf)\n\nOff. Comptroller Currency, Comptroller\u2019s Handbook: Model Risk Management (Aug. 2021). [URL](https:\/\/www.occ.gov\/publications-and-resources\/publications\/comptrollers-handbook\/files\/model-risk-management\/index-model-risk-management.html)",
"AI Actors":[
"Third-party entities",
"Operation and Monitoring",
"Procurement"
],
"Topic":[
"Third-party",
"Legal and Regulatory",
"Procurement",
"Supply Chain",
"Governance"
]
},
{
"type":"Govern",
"title":"GOVERN 6.2",
"category":"GOVERN-6",
"description":"Contingency processes are in place to handle failures or incidents in third-party data or AI systems deemed to be high-risk.",
"section_about":"To mitigate the potential harms of third-party system failures, organizations may implement policies and procedures that include redundancies for covering third-party functions.",
"section_actions":"- Establish policies for handling third-party system failures to include consideration of redundancy mechanisms for vital third-party AI systems.\n- Verify that incident response plans address third-party AI systems.",
"section_doc":"### Organizations can document the following\n- To what extent does the plan specifically address risks associated with acquisition, procurement of packaged software from vendors, cybersecurity controls, computational infrastructure, data, data science, deployment mechanics, and system failure?\n- Did you establish a process for third parties (e.g. suppliers, end users, subjects, distributors\/vendors or workers) to report potential vulnerabilities, risks or biases in the AI system?\n- If your organization obtained datasets from a third party, did your organization assess and manage the risks of using such datasets?\n\n### AI Transparency Resources\n- GAO-21-519SP: AI Accountability Framework for Federal Agencies & Other Entities. [URL](https:\/\/www.gao.gov\/products\/gao-21-519sp)\n- WEF Model AI Governance Framework Assessment 2020. [URL](https:\/\/www.pdpc.gov.sg\/-\/media\/Files\/PDPC\/PDF-Files\/Resource-for-Organisation\/AI\/SGModelAIGovFramework2.pdf)\n- WEF Companion to the Model AI Governance Framework- 2020. [URL](https:\/\/www.pdpc.gov.sg\/-\/media\/Files\/PDPC\/PDF-Files\/Resource-for-Organisation\/AI\/SGIsago.pdf)\n- AI policies and initiatives, in Artificial Intelligence in Society, OECD, 2019. [URL](https:\/\/www.oecd.org\/publications\/artificial-intelligence-in-society-eedfee77-en.htm)",
"section_ref":"Bd. Governors Fed. Rsrv. Sys., Supervisory Guidance on Model Risk Management, SR Letter 11-7 (Apr. 4, 2011)\n\n\u201cProposed Interagency Guidance on Third-Party Relationships: Risk Management,\u201d 2021. [URL](https:\/\/www.occ.gov\/news-issuances\/news-releases\/2021\/nr-occ-2021-74a.pdf)\n\nOff. Comptroller Currency, Comptroller\u2019s Handbook: Model Risk Management (Aug. 2021). [URL](https:\/\/www.occ.gov\/publications-and-resources\/publications\/comptrollers-handbook\/files\/model-risk-management\/index-model-risk-management.html)",
"AI Actors":[
"AI Deployment",
"TEVV",
"Operation and Monitoring",
"Third-party entities"
],
"Topic":[
"Third-party",
"Governance",
"Risk Management",
"Supply Chain"
]
},
{
"type":"Manage",
"title":"MANAGE 1.1",
"category":"MANAGE-1",
"description":"A determination is made as to whether the AI system achieves its intended purpose and stated objectives and whether its development or deployment should proceed.",
"section_about":"AI systems may not necessarily be the right solution for a given business task or problem. A standard risk management practice is to formally weigh an AI system\u2019s negative risks against its benefits, and to determine if the AI system is an appropriate solution. Tradeoffs among trustworthiness characteristics \u2014such as deciding to deploy a system based on system performance vs system transparency\u2013may require regular assessment throughout the AI lifecycle.",
"section_actions":"- Consider trustworthiness characteristics when evaluating AI systems\u2019 negative risks and benefits.\n- Utilize TEVV outputs from map and measure functions when considering risk treatment.\n- Regularly track and monitor negative risks and benefits throughout the AI system lifecycle including in post-deployment monitoring.\n- Regularly assess and document system performance relative to trustworthiness characteristics and tradeoffs between negative risks and opportunities.\n- Evaluate tradeoffs in connection with real-world use cases and impacts and as enumerated in Map function outcomes.",
"section_doc":"### Organizations can document the following\n\n- How do the technical specifications and requirements align with the AI system\u2019s goals and objectives?\n- To what extent are the metrics consistent with system goals, objectives, and constraints, including ethical and compliance considerations?\n- What goals and objectives does the entity expect to achieve by designing, developing, and\/or deploying the AI system?\n\n### AI Transparency Resources\n\n- GAO-21-519SP - Artificial Intelligence: An Accountability Framework for Federal Agencies & Other Entities. [URL](https:\/\/www.gao.gov\/products\/gao-21-519sp)\n- Artificial Intelligence Ethics Framework For The Intelligence Community. [URL](https:\/\/www.intelligence.gov\/artificial-intelligence-ethics-framework-for-the-intelligence-community) \n- WEF Companion to the Model AI Governance Framework \u2013 Implementation and Self-Assessment Guide for Organizations [URL](https:\/\/www.pdpc.gov.sg\/-\/media\/files\/pdpc\/pdf-files\/resource-for-organisation\/ai\/sgisago.ashx)",
"section_ref":"Arvind Narayanan. How to recognize AI snake oil. Retrieved October 15, 2022. [URL](https:\/\/www.cs.princeton.edu\/~arvindn\/talks\/MIT-STS-AI-snakeoil.pdf)\n\nBoard of Governors of the Federal Reserve System. SR 11-7: Guidance on Model Risk Management. (April 4, 2011). [URL](https:\/\/www.federalreserve.gov\/supervisionreg\/srletters\/sr1107.htm)\n\nEmanuel Moss, Elizabeth Watkins, Ranjit Singh, Madeleine Clare Elish, Jacob Metcalf. 2021. Assembling Accountability: Algorithmic Impact Assessment for the Public Interest. (June 29, 2021). [URL](https:\/\/ssrn.com\/abstract=3877437 or http:\/\/dx.doi.org\/10.2139\/ssrn.3877437)\n\nFraser, Henry L and Bello y Villarino, Jose-Miguel, Where Residual Risks Reside: A Comparative Approach to Art 9(4) of the European Union's Proposed AI Regulation (September 30, 2021). [LINK](https:\/\/ssrn.com\/abstract=3960461), [URL](http:\/\/dx.doi.org\/10.2139\/ssrn.3960461)\n\nMicrosoft. 2022. Microsoft Responsible AI Impact Assessment Template. (June 2022). [URL](https:\/\/blogs.microsoft.com\/wp-content\/uploads\/prod\/sites\/5\/2022\/06\/Microsoft-RAI-Impact-Assessment-Template.pdf)\n\nOffice of the Comptroller of the Currency. 2021. Comptroller's Handbook: Model Risk Management, Version 1.0, August 2021. [URL](https:\/\/www.occ.gov\/publications-and-resources\/publications\/comptrollers-handbook\/files\/model-risk-management\/index-model-risk-management.html)\n\nSolon Barocas, Asia J. Biega, Benjamin Fish, et al. 2020. When not to design, build, or deploy. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (FAT* '20). Association for Computing Machinery, New York, NY, USA, 695. [URL](https:\/\/doi.org\/10.1145\/3351095.3375691)",
"AI Actors":[
"AI Deployment",
"Operation and Monitoring",
"AI Impact Assessment"
],
"Topic":[
"AI Deployment",
"Risk Assessment"
]
},
{
"type":"Manage",
"title":"MANAGE 1.2",
"category":"MANAGE-1",
"description":"Treatment of documented AI risks is prioritized based on impact, likelihood, or available resources or methods.",
"section_about":"Risk refers to the composite measure of an event\u2019s probability of occurring and the magnitude (or degree) of the consequences of the corresponding events. The impacts, or consequences, of AI systems can be positive, negative, or both and can result in opportunities or risks. \n\nOrganizational risk tolerances are often informed by several internal and external factors, including existing industry practices, organizational values, and legal or regulatory requirements. Since risk management resources are often limited, organizations usually assign them based on risk tolerance. AI risks that are deemed more serious receive more oversight attention and risk management resources.",
"section_actions":"- Assign risk management resources relative to established risk tolerance. AI systems with lower risk tolerances receive greater oversight, mitigation and management resources. \n- Document AI risk tolerance determination practices and resource decisions.\n- Regularly review risk tolerances and re-calibrate, as needed, in accordance with information from AI system monitoring and assessment .",
"section_doc":"### Organizations can document the following\n\n- Did your organization implement a risk management system to address risks involved in deploying the identified AI solution (e.g. personnel risk or changes to commercial objectives)?\n- What assessments has the entity conducted on data security and privacy impacts associated with the AI system?\n- Does your organization have an existing governance structure that can be leveraged to oversee the organization\u2019s use of AI?\n\n### AI Transparency Resources\n\n- WEF Companion to the Model AI Governance Framework \u2013 Implementation and Self-Assessment Guide for Organizations [URL](https:\/\/www.pdpc.gov.sg\/-\/media\/files\/pdpc\/pdf-files\/resource-for-organisation\/ai\/sgisago.ashx)\n- GAO-21-519SP - Artificial Intelligence: An Accountability Framework for Federal Agencies & Other Entities. [URL](https:\/\/www.gao.gov\/products\/gao-21-519sp)",
"section_ref":"Arvind Narayanan. How to recognize AI snake oil. Retrieved October 15, 2022. [URL](https:\/\/www.cs.princeton.edu\/~arvindn\/talks\/MIT-STS-AI-snakeoil.pdf)\n\nBoard of Governors of the Federal Reserve System. SR 11-7: Guidance on Model Risk Management. (April 4, 2011). [URL](https:\/\/www.federalreserve.gov\/supervisionreg\/srletters\/sr1107.htm)\n\nEmanuel Moss, Elizabeth Watkins, Ranjit Singh, Madeleine Clare Elish, Jacob Metcalf. 2021. Assembling Accountability: Algorithmic Impact Assessment for the Public Interest. (June 29, 2021). [URL](https:\/\/ssrn.com\/abstract=3877437 or http:\/\/dx.doi.org\/10.2139\/ssrn.3877437)\n\nFraser, Henry L and Bello y Villarino, Jose-Miguel, Where Residual Risks Reside: A Comparative Approach to Art 9(4) of the European Union's Proposed AI Regulation (September 30, 2021). [LINK](https:\/\/ssrn.com\/abstract=3960461), [URL](http:\/\/dx.doi.org\/10.2139\/ssrn.3960461)\n\nMicrosoft. 2022. Microsoft Responsible AI Impact Assessment Template. (June 2022). [URL](https:\/\/blogs.microsoft.com\/wp-content\/uploads\/prod\/sites\/5\/2022\/06\/Microsoft-RAI-Impact-Assessment-Template.pdf)\n\nOffice of the Comptroller of the Currency. 2021. Comptroller's Handbook: Model Risk Management, Version 1.0, August 2021. [URL](https:\/\/www.occ.gov\/publications-and-resources\/publications\/comptrollers-handbook\/files\/model-risk-management\/index-model-risk-management.html)\n\nSolon Barocas, Asia J. Biega, Benjamin Fish, et al. 2020. When not to design, build, or deploy. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (FAT* '20). Association for Computing Machinery, New York, NY, USA, 695. [URL](https:\/\/doi.org\/10.1145\/3351095.3375691)",
"AI Actors":[
"AI Deployment",
"Operation and Monitoring",
"AI Impact Assessment"
],
"Topic":[
"Risk Tolerance"
]
},
{
"type":"Manage",
"title":"MANAGE 1.3",
"category":"MANAGE-1",
"description":"Responses to the AI risks deemed high priority as identified by the Map function, are developed, planned, and documented. Risk response options can include mitigating, transferring, avoiding, or accepting.",
"section_about":"Outcomes from GOVERN-1, MAP-5 and MEASURE-2, can be used to address and document identified risks based on established risk tolerances. Organizations can follow existing regulations and guidelines for risk criteria, tolerances and responses established by organizational, domain, discipline, sector, or professional requirements. In lieu of such guidance, organizations can develop risk response plans based on strategies such as accepted model risk management, enterprise risk management, and information sharing and disclosure practices.",
"section_actions":"- Observe regulatory and established organizational, sector, discipline, or professional standards and requirements for applying risk tolerances within the organization.\n- Document procedures for acting on AI system risks related to trustworthiness characteristics.\n- Prioritize risks involving physical safety, legal liabilities, regulatory compliance, and negative impacts on individuals, groups, or society.\n- Identify risk response plans and resources and organizational teams for carrying out response functions.\n- Store risk management and system documentation in an organized, secure repository that is accessible by relevant AI Actors and appropriate personnel.",
"section_doc":"### Organizations can document the following\n\n- Has the system been reviewed to ensure the AI system complies with relevant laws, regulations, standards, and guidance?\n- To what extent has the entity defined and documented the regulatory environment\u2014including minimum requirements in laws and regulations?\n- Did your organization implement a risk management system to address risks involved in deploying the identified AI solution (e.g. personnel risk or changes to commercial objectives)?\n\n### AI Transparency Resources\n\n- GAO-21-519SP - Artificial Intelligence: An Accountability Framework for Federal Agencies & Other Entities. [URL](https:\/\/www.gao.gov\/products\/gao-21-519sp)\n- Datasheets for Datasets. [URL](https:\/\/arxiv.org\/abs\/1803.09010)",
"section_ref":"Arvind Narayanan. How to recognize AI snake oil. Retrieved October 15, 2022. [URL](https:\/\/www.cs.princeton.edu\/~arvindn\/talks\/MIT-STS-AI-snakeoil.pdf)\n\nBoard of Governors of the Federal Reserve System. SR 11-7: Guidance on Model Risk Management. (April 4, 2011). [URL](https:\/\/www.federalreserve.gov\/supervisionreg\/srletters\/sr1107.htm)\n\nEmanuel Moss, Elizabeth Watkins, Ranjit Singh, Madeleine Clare Elish, Jacob Metcalf. 2021. Assembling Accountability: Algorithmic Impact Assessment for the Public Interest. (June 29, 2021). [URL](https:\/\/ssrn.com\/abstract=3877437 or http:\/\/dx.doi.org\/10.2139\/ssrn.3877437)\n\nFraser, Henry L and Bello y Villarino, Jose-Miguel, Where Residual Risks Reside: A Comparative Approach to Art 9(4) of the European Union's Proposed AI Regulation (September 30, 2021). [LINK](https:\/\/ssrn.com\/abstract=3960461), [URL](http:\/\/dx.doi.org\/10.2139\/ssrn.3960461)\n\nMicrosoft. 2022. Microsoft Responsible AI Impact Assessment Template. (June 2022). [URL](https:\/\/blogs.microsoft.com\/wp-content\/uploads\/prod\/sites\/5\/2022\/06\/Microsoft-RAI-Impact-Assessment-Template.pdf)\n\nOffice of the Comptroller of the Currency. 2021. Comptroller's Handbook: Model Risk Management, Version 1.0, August 2021. [URL](https:\/\/www.occ.gov\/publications-and-resources\/publications\/comptrollers-handbook\/files\/model-risk-management\/index-model-risk-management.html)\n\nSolon Barocas, Asia J. Biega, Benjamin Fish, et al. 2020. When not to design, build, or deploy. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (FAT* '20). Association for Computing Machinery, New York, NY, USA, 695. [URL](https:\/\/doi.org\/10.1145\/3351095.3375691)",
"AI Actors":[
"AI Deployment",
"Operation and Monitoring",
"AI Impact Assessment"
],
"Topic":[
"Legal and Regulatory",
"Risk Tolerance"
]
},
{
"type":"Manage",
"title":"MANAGE 1.4",
"category":"MANAGE-1",
"description":"Negative residual risks (defined as the sum of all unmitigated risks) to both downstream acquirers of AI systems and end users are documented.",
"section_about":"Organizations may choose to accept or transfer some of the documented risks from MAP and MANAGE 1.3 and 2.1. Such risks, known as residual risk, may affect downstream AI actors such as those engaged in system procurement or use. Transparent monitoring and managing residual risks enables cost benefit analysis and the examination of potential values of AI systems versus its potential negative impacts.",
"section_actions":"- Document residual risks within risk response plans, denoting risks that have been accepted, transferred, or subject to minimal mitigation. \n- Establish procedures for disclosing residual risks to relevant downstream AI actors .\n- Inform relevant downstream AI actors of requirements for safe operation, known limitations, and suggested warning labels as identified in MAP 3.4.",
"section_doc":"### Organizations can document the following\n\n- What are the roles, responsibilities, and delegation of authorities of personnel involved in the design, development, deployment, assessment and monitoring of the AI system?\n- Who will be responsible for maintaining, re-verifying, monitoring, and updating this AI once deployed?\n- How will updates\/revisions be documented and communicated? How often and by whom?\n- How easily accessible and current is the information available to external stakeholders?\n\n### AI Transparency Resources\n\n- GAO-21-519SP - Artificial Intelligence: An Accountability Framework for Federal Agencies & Other Entities. [URL](https:\/\/www.gao.gov\/products\/gao-21-519sp)\n- Artificial Intelligence Ethics Framework For The Intelligence Community. [URL](https:\/\/www.intelligence.gov\/artificial-intelligence-ethics-framework-for-the-intelligence-community) \n- Datasheets for Datasets. [URL](https:\/\/arxiv.org\/abs\/1803.09010)",
"section_ref":"Arvind Narayanan. How to recognize AI snake oil. Retrieved October 15, 2022. [URL](https:\/\/www.cs.princeton.edu\/~arvindn\/talks\/MIT-STS-AI-snakeoil.pdf)\n\nBoard of Governors of the Federal Reserve System. SR 11-7: Guidance on Model Risk Management. (April 4, 2011). [URL](https:\/\/www.federalreserve.gov\/supervisionreg\/srletters\/sr1107.htm)\n\nEmanuel Moss, Elizabeth Watkins, Ranjit Singh, Madeleine Clare Elish, Jacob Metcalf. 2021. Assembling Accountability: Algorithmic Impact Assessment for the Public Interest. (June 29, 2021). [URL](https:\/\/ssrn.com\/abstract=3877437 or http:\/\/dx.doi.org\/10.2139\/ssrn.3877437)\n\nFraser, Henry L and Bello y Villarino, Jose-Miguel, Where Residual Risks Reside: A Comparative Approach to Art 9(4) of the European Union's Proposed AI Regulation (September 30, 2021). [LINK](https:\/\/ssrn.com\/abstract=3960461), [URL](http:\/\/dx.doi.org\/10.2139\/ssrn.3960461)\n\nMicrosoft. 2022. Microsoft Responsible AI Impact Assessment Template. (June 2022). [URL](https:\/\/blogs.microsoft.com\/wp-content\/uploads\/prod\/sites\/5\/2022\/06\/Microsoft-RAI-Impact-Assessment-Template.pdf)\n\nOffice of the Comptroller of the Currency. 2021. Comptroller's Handbook: Model Risk Management, Version 1.0, August 2021. [URL](https:\/\/www.occ.gov\/publications-and-resources\/publications\/comptrollers-handbook\/files\/model-risk-management\/index-model-risk-management.html)\n\nSolon Barocas, Asia J. Biega, Benjamin Fish, et al. 2020. When not to design, build, or deploy. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (FAT* '20). Association for Computing Machinery, New York, NY, USA, 695. [URL](https:\/\/doi.org\/10.1145\/3351095.3375691)",
"AI Actors":[
"AI Deployment",
"Operation and Monitoring",
"AI Impact Assessment"
],
"Topic":[
"Risk Response"
]
},
{
"type":"Manage",
"title":"MANAGE 2.1",
"category":"MANAGE-2",
"description":"Resources required to manage AI risks are taken into account, along with viable non-AI alternative systems, approaches, or methods \u2013 to reduce the magnitude or likelihood of potential impacts.",
"section_about":"Organizational risk response may entail identifying and analyzing alternative approaches, methods, processes or systems, and balancing tradeoffs between trustworthiness characteristics and how they relate to organizational principles and societal values. Analysis of these tradeoffs is informed by consulting with interdisciplinary organizational teams, independent domain experts, and engaging with individuals or community groups. These processes require sufficient resource allocation.",
"section_actions":"- Plan and implement risk management practices in accordance with established organizational risk tolerances.\n- Verify risk management teams are resourced to carry out functions, including\n\t- Establishing processes for considering methods that are not automated; semi-automated; or other procedural alternatives for AI functions. \n\t- Enhance AI system transparency mechanisms for AI teams.\n\t- Enable exploration of AI system limitations by AI teams. \n\t- Identify, assess, and catalog past failed designs and negative impacts or outcomes to avoid known failure modes.\n- Identify resource allocation approaches for managing risks in systems:\n\t- deemed high-risk,\n\t- that self-update (adaptive, online, reinforcement self-supervised learning or similar),\n\t- trained without access to ground truth (unsupervised, semi-supervised, learning or similar), \n\t- with high uncertainty or where risk management is insufficient.\n- Regularly seek and integrate external expertise and perspectives to supplement organizational diversity (e.g. demographic, disciplinary), equity, inclusion, and accessibility where internal capacity is lacking.\n- Enable and encourage regular, open communication and feedback among AI actors and internal or external stakeholders related to system design or deployment decisions.\n- Prepare and document plans for continuous monitoring and feedback mechanisms.",
"section_doc":"### Organizations can document the following\n\n- Are mechanisms in place to evaluate whether internal teams are empowered and resourced to effectively carry out risk management functions?\n- How will user and other forms of stakeholder engagement be integrated into risk management processes?\n\n### AI Transparency Resources\n\n- Artificial Intelligence Ethics Framework For The Intelligence Community. [URL](https:\/\/www.intelligence.gov\/artificial-intelligence-ethics-framework-for-the-intelligence-community) \n- Datasheets for Datasets. [URL](https:\/\/arxiv.org\/abs\/1803.09010)\n- GAO-21-519SP - Artificial Intelligence: An Accountability Framework for Federal Agencies & Other Entities. [URL](https:\/\/www.gao.gov\/products\/gao-21-519sp)",
"section_ref":"Board of Governors of the Federal Reserve System. SR 11-7: Guidance on Model Risk Management. (April 4, 2011). [URL](https:\/\/www.federalreserve.gov\/supervisionreg\/srletters\/sr1107.htm)\n\nDavid Wright. 2013. Making Privacy Impact Assessments More Effective. The Information Society, 29 (Oct 2013), 307-315. [URL](https:\/\/doi-org.proxygw.wrlc.org\/10.1080\/01972243.2013.825687)\n\nMargaret Mitchell, Simone Wu, Andrew Zaldivar, et al. 2019. Model Cards for Model Reporting. In Proceedings of the Conference on Fairness, Accountability, and Transparency (FAT* '19). Association for Computing Machinery, New York, NY, USA, 220\u2013229. [URL](https:\/\/doi.org\/10.1145\/3287560.3287596)\n\nOffice of the Comptroller of the Currency. 2021. Comptroller's Handbook: Model Risk Management, Version 1.0, August 2021. [URL](https:\/\/www.occ.gov\/publications-and-resources\/publications\/comptrollers-handbook\/files\/model-risk-management\/index-model-risk-management.html)\n\nTimnit Gebru, Jamie Morgenstern, Briana Vecchione, et al. 2021. Datasheets for Datasets. arXiv:1803.09010. [URL](https:\/\/arxiv.org\/abs\/1803.09010)",
"AI Actors":[
"AI Deployment",
"Operation and Monitoring",
"AI Impact Assessment",
"Governance and Oversight"
],
"Topic":[
"Risk Tolerance",
"Trade-offs"
]
},
{
"type":"Manage",
"title":"MANAGE 2.2",
"category":"MANAGE-2",
"description":"Mechanisms are in place and applied to sustain the value of deployed AI systems.",
"section_about":"System performance and trustworthiness may evolve and shift over time, once an AI system is deployed and put into operation. This phenomenon, generally known as drift, can degrade the value of the AI system to the organization and increase the likelihood of negative impacts. Regular monitoring of AI systems\u2019 performance and trustworthiness enhances organizations\u2019 ability to detect and respond to drift, and thus sustain an AI system\u2019s value once deployed. Processes and mechanisms for regular monitoring address system functionality and behavior - as well as impacts and alignment with the values and norms within the specific context of use. For example, considerations regarding impacts on personal or public safety or privacy may include limiting high speeds when operating autonomous vehicles or restricting illicit content recommendations for minors. \n\nRegular monitoring activities can enable organizations to systematically and proactively identify emergent risks and respond according to established protocols and metrics. Options for organizational responses include 1) avoiding the risk, 2)accepting the risk, 3) mitigating the risk, or 4) transferring the risk. Each of these actions require planning and resources. Organizations are encouraged to establish risk management protocols with consideration of the trustworthiness characteristics, the deployment context, and real world impacts.",
"section_actions":"- Establish risk controls considering trustworthiness characteristics, including:\n\t- Data management, quality, and privacy (e.g. minimization, rectification or deletion requests) controls as part of organizational data governance policies. \n\t- Machine learning and end-point security countermeasures (e.g., robust models, differential privacy, authentication, throttling).\n\t- Business rules that augment, limit or restrict AI system outputs within certain contexts \n\t- Utilizing domain expertise related to deployment context for continuous improvement and TEVV across the AI lifecycle.\n\t- Development and regular tracking of human-AI teaming configurations.\n\t- Model assessment and test, evaluation, validation and verification (TEVV) protocols.\n\t- Use of standardized documentation and transparency mechanisms.\n\t- Software quality assurance practices across AI lifecycle.\n\t- Mechanisms to explore system limitations and avoid past failed designs or deployments.\n- Establish mechanisms to capture feedback from system end users and potentially impacted groups.\n- Review insurance policies, warranties, or contracts for legal or oversight requirements for risk transfer procedures.\n- Document risk tolerance decisions and risk acceptance procedures.",
"section_doc":"### Organizations can document the following\n\n- To what extent can users or parties affected by the outputs of the AI system test the AI system and provide feedback?\n- Could the AI system expose people to harm or negative impacts? What was done to mitigate or reduce the potential for harm?\n- How will the accountable human(s) address changes in accuracy and precision due to either an adversary\u2019s attempts to disrupt the AI or unrelated changes in the operational or business environment?\n\n### AI Transparency Resources\n\n- GAO-21-519SP - Artificial Intelligence: An Accountability Framework for Federal Agencies & Other Entities. [URL](https:\/\/www.gao.gov\/products\/gao-21-519sp)\n- Artificial Intelligence Ethics Framework For The Intelligence Community. [URL](https:\/\/www.intelligence.gov\/artificial-intelligence-ethics-framework-for-the-intelligence-community)",
"section_ref":"### Safety, Validity and Reliability Risk Management Approaches and Resources\n\nAI Incident Database. 2022. AI Incident Database. [URL](https:\/\/incidentdatabase.ai\/)\n\nAIAAIC Repository. 2022. AI, algorithmic and automation incidents collected, dissected, examined, and divulged. [URL](https:\/\/www.aiaaic.org\/aiaaic-repository)\n\nAlexander D'Amour, Katherine Heller, Dan Moldovan, et al. 2020. Underspecification Presents Challenges for Credibility in Modern Machine Learning. arXiv:2011.03395. [URL](https:\/\/arxiv.org\/abs\/2011.03395)\n\nAndrew L. Beam, Arjun K. Manrai, Marzyeh Ghassemi. 2020. Challenges to the Reproducibility of Machine Learning Models in Health Care. Jama 323, 4 (January 6, 2020), 305-306. [URL](https:\/\/doi.org\/10.1001\/jama.2019.20866)\n\nAnthony M. Barrett, Dan Hendrycks, Jessica Newman et al. 2022. Actionable Guidance for High-Consequence AI Risk Management: Towards Standards Addressing AI Catastrophic Risks. arXiv:2206.08966. [URL](https:\/\/doi.org\/10.48550\/arXiv.2206.08966)\n\nDebugging Machine Learning Models, In Proceedings of ICLR 2019 Workshop, May 6, 2019, New Orleans, Louisiana. [URL](https:\/\/debug-ml-iclr2019.github.io\/)\n\nJessie J. Smith, Saleema Amershi, Solon Barocas, et al. 2022. REAL ML: Recognizing, Exploring, and Articulating Limitations of Machine Learning Research. arXiv:2205.08363. [URL](https:\/\/arxiv.org\/abs\/2205.08363)\n\nJoelle Pineau, Philippe Vincent-Lamarre, Koustuv Sinha, et al. 2020. Improving Reproducibility in Machine Learning Research (A Report from the NeurIPS 2019 Reproducibility Program) arXiv:2003.12206. [URL](https:\/\/doi.org\/10.48550\/arXiv.2003.12206)\n\nKirstie Whitaker. 2017. Showing your working: a how to guide to reproducible research. (August 2017). [LINK](https:\/\/github.com\/WhitakerLab\/ReproducibleResearch\/blob\/master\/PRESENTATIONS\/Whitaker_ICON_August2017.pdf), [URL](https:\/\/doi.org\/10.6084\/m9.figshare.4244996.v2)\n\nNetflix. Chaos Monkey. [URL](https:\/\/netflix.github.io\/chaosmonkey\/)\n\nPeter Henderson, Riashat Islam, Philip Bachman, et al. 2018. Deep reinforcement learning that matters. Proceedings of the AAAI Conference on Artificial Intelligence. 32, 1 (Apr. 2018). [URL](https:\/\/doi.org\/10.1609\/aaai.v32i1.11694)\n\nSuchi Saria, Adarsh Subbaswamy. 2019. Tutorial: Safe and Reliable Machine Learning. arXiv:1904.07204. [URL](https:\/\/doi.org\/10.48550\/arXiv.1904.07204)\n\nKang, Daniel, Deepti Raghavan, Peter Bailis, and Matei Zaharia. \"Model assertions for monitoring and improving ML models.\" Proceedings of Machine Learning and Systems 2 (2020): 481-496. [URL](https:\/\/proceedings.mlsys.org\/paper\/2020\/file\/a2557a7b2e94197ff767970b67041697-Paper.pdf)\n\n### Managing Risk Bias\n\nNational Institute of Standards and Technology (NIST), Reva Schwartz, Apostol Vassilev, et al. 2022. NIST Special Publication 1270 Towards a Standard for Identifying and Managing Bias in Artificial Intelligence. [URL](https:\/\/nvlpubs.nist.gov\/nistpubs\/SpecialPublications\/NIST.SP.1270.pdf)\n\n### Bias Testing and Remediation Approaches \n\nAlekh Agarwal, Alina Beygelzimer, Miroslav Dud\u00edk, et al. 2018. A Reductions Approach to Fair Classification. arXiv:1803.02453. [URL](https:\/\/doi.org\/10.48550\/arXiv.1803.02453)\n\nBrian Hu Zhang, Blake Lemoine, Margaret Mitchell. 2018. Mitigating Unwanted Biases with Adversarial Learning. arXiv:1801.07593. [URL](https:\/\/doi.org\/10.48550\/arXiv.1801.07593)\n\nDrago Ple\u010dko, Nicolas Bennett, Nicolai Meinshausen. 2021. 
Fairadapt: Causal Reasoning for Fair Data Pre-processing. arXiv:2110.10200. [URL](https:\/\/doi.org\/10.48550\/arXiv.2110.10200)\n\nFaisal Kamiran, Toon Calders. 2012. Data Preprocessing Techniques for Classification without Discrimination. Knowledge and Information Systems 33 (2012), 1\u201333. [URL](https:\/\/doi.org\/10.1007\/s10115-011-0463-8)\n\nFaisal Kamiran; Asim Karim; Xiangliang Zhang. 2012. Decision Theory for Discrimination-Aware Classification. In Proceedings of the 2012 IEEE 12th International Conference on Data Mining, December 10-13, 2012, Brussels, Belgium. IEEE, 924-929. [URL](https:\/\/doi.org\/10.1109\/ICDM.2012.45)\n\nFlavio P. Calmon, Dennis Wei, Karthikeyan Natesan Ramamurthy, et al. 2017. Optimized Data Pre-Processing for Discrimination Prevention. arXiv:1704.03354. [URL](https:\/\/doi.org\/10.48550\/arXiv.1704.03354)\n\nGeoff Pleiss, Manish Raghavan, Felix Wu, et al. 2017. On Fairness and Calibration. arXiv:1709.02012. [URL](https:\/\/doi.org\/10.48550\/arXiv.1709.02012)\n\nL. Elisa Celis, Lingxiao Huang, Vijay Keswani, et al. 2020. Classification with Fairness Constraints: A Meta-Algorithm with Provable Guarantees. arXiv:1806.06055. [URL](https:\/\/doi.org\/10.48550\/arXiv.1806.06055)\n\nMichael Feldman, Sorelle Friedler, John Moeller, et al. 2014. Certifying and Removing Disparate Impact. arXiv:1412.3756. [URL](https:\/\/doi.org\/10.48550\/arXiv.1412.3756)\n\nMichael Kearns, Seth Neel, Aaron Roth, et al. 2017. Preventing Fairness Gerrymandering: Auditing and Learning for Subgroup Fairness. arXiv:1711.05144. [URL](https:\/\/doi.org\/10.48550\/arXiv.1711.05144)\n\nMichael Kearns, Seth Neel, Aaron Roth, et al. 2018. An Empirical Study of Rich Subgroup Fairness for Machine Learning. arXiv:1808.08166. [URL](https:\/\/doi.org\/10.48550\/arXiv.1808.08166)\n\nMoritz Hardt, Eric Price, and Nathan Srebro. 2016. Equality of Opportunity in Supervised Learning. In Proceedings of the 30th Conference on Neural Information Processing Systems (NIPS 2016), 2016, Barcelona, Spain. [URL](https:\/\/papers.nips.cc\/paper\/2016\/file\/9d2682367c3935defcb1f9e247a97c0d-Paper.pdf)\n\nRich Zemel, Yu Wu, Kevin Swersky, et al. 2013. Learning Fair Representations. In Proceedings of the 30th International Conference on Machine Learning 2013, PMLR 28, 3, 325-333. [URL](http:\/\/proceedings.mlr.press\/v28\/zemel13.html)\n\nToshihiro Kamishima, Shotaro Akaho, Hideki Asoh & Jun Sakuma. 2012. Fairness-Aware Classifier with Prejudice Remover Regularizer. In Peter A. Flach, Tijl De Bie, Nello Cristianini (eds) Machine Learning and Knowledge Discovery in Databases. European Conference ECML PKDD 2012, Proceedings Part II, September 24-28, 2012, Bristol, UK. Lecture Notes in Computer Science 7524. Springer, Berlin, Heidelberg. [URL](https:\/\/doi.org\/10.1007\/978-3-642-33486-3_3)\n\n### Security and Resilience Resources\n\nFTC Start With Security Guidelines. 2015. [URL](https:\/\/www.ftc.gov\/system\/files\/documents\/plain-language\/pdf0205-startwithsecurity.pdf) \n\nGary McGraw et al. 2022. BIML Interactive Machine Learning Risk Framework. Berryville Institute for Machine Learning. [URL](https:\/\/berryvilleiml.com\/interactive\/)\n\nIlia Shumailov, Yiren Zhao, Daniel Bates, et al. 2021. Sponge Examples: Energy-Latency Attacks on Neural Networks. arXiv:2006.03463. [URL](https:\/\/doi.org\/10.48550\/arXiv.2006.03463)\n\nMarco Barreno, Blaine Nelson, Anthony D. Joseph, et al. 2010. The Security of Machine Learning. Machine Learning 81 (2010), 121-148. 
[URL](https:\/\/doi.org\/10.1007\/s10994-010-5188-5)\n\nMatt Fredrikson, Somesh Jha, Thomas Ristenpart. 2015. Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures. In Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security (CCS '15), October 2015. Association for Computing Machinery, New York, NY, USA, 1322\u20131333. [URL](https:\/\/doi.org\/10.1145\/2810103.2813677)\n\nNational Institute for Standards and Technology (NIST). 2022. Cybersecurity Framework. [URL](https:\/\/www.nist.gov\/cyberframework)\n\nNicolas Papernot. 2018. A Marauder's Map of Security and Privacy in Machine Learning. arXiv:1811.01134. [URL](https:\/\/doi.org\/10.48550\/arXiv.1811.01134)\n\nReza Shokri, Marco Stronati, Congzheng Song, et al. 2017. Membership Inference Attacks against Machine Learning Models. arXiv:1610.05820. [URL](https:\/\/doi.org\/10.48550\/arXiv.1610.05820)\n\nAdversarial Threat Matrix (MITRE). 2021. [URL](https:\/\/github.com\/mitre\/advmlthreatmatrix)\n\n### Interpretability and Explainability Approaches\n\nChaofan Chen, Oscar Li, Chaofan Tao, et al. 2019. This Looks Like That: Deep Learning for Interpretable Image Recognition. arXiv:1806.10574. [URL](https:\/\/doi.org\/10.48550\/arXiv.1806.10574)\n\nCynthia Rudin. 2019. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. arXiv:1811.10154. [URL](https:\/\/doi.org\/10.48550\/arXiv.1811.10154)\n\nDaniel W. Apley, Jingyu Zhu. 2019. Visualizing the Effects of Predictor Variables in Black Box Supervised Learning Models. arXiv:1612.08468. [URL](https:\/\/doi.org\/10.48550\/arXiv.1612.08468)\n\nDavid A. Broniatowski. 2021. Psychological Foundations of Explainability and Interpretability in Artificial Intelligence. National Institute of Standards and Technology (NIST) IR 8367. National Institute of Standards and Technology, Gaithersburg, MD. [URL](https:\/\/doi.org\/10.6028\/NIST.IR.8367)\n\nForough Poursabzi-Sangdeh, Daniel G. Goldstein, Jake M. Hofman, et al. 2021. Manipulating and Measuring Model Interpretability. arXiv:1802.07810. [URL](https:\/\/doi.org\/10.48550\/arXiv.1802.07810)\n\nHongyu Yang, Cynthia Rudin, Margo Seltzer. 2017. Scalable Bayesian Rule Lists. arXiv:1602.08610. [URL](https:\/\/doi.org\/10.48550\/arXiv.1602.08610)\n\nP. Jonathon Phillips, Carina A. Hahn, Peter C. Fontana, et al. 2021. Four Principles of Explainable Artificial Intelligence. National Institute of Standards and Technology (NIST) IR 8312. National Institute of Standards and Technology, Gaithersburg, MD. [URL](https:\/\/doi.org\/10.6028\/NIST.IR.8312)\n\nScott Lundberg, Su-In Lee. 2017. A Unified Approach to Interpreting Model Predictions. arXiv:1705.07874. [URL](https:\/\/doi.org\/10.48550\/arXiv.1705.07874)\n\nSusanne Gaube, Harini Suresh, Martina Raue, et al. 2021. Do as AI say: susceptibility in deployment of clinical decision-aids. npj Digital Medicine 4, Article 31 (2021). [URL](https:\/\/doi.org\/10.1038\/s41746-021-00385-9)\n\nYin Lou, Rich Caruana, Johannes Gehrke, et al. 2013. Accurate intelligible models with pairwise interactions. In Proceedings of the 19th ACM SIGKDD international conference on Knowledge discovery and data mining (KDD '13), August 2013. Association for Computing Machinery, New York, NY, USA, 623\u2013631. [URL](https:\/\/doi.org\/10.1145\/2487575.2487579)\n\n### Privacy Resources\n\nNational Institute for Standards and Technology (NIST). 2022. Privacy Framework. 
[URL](https:\/\/www.nist.gov\/privacy-framework)\n\n### Data Governance\n\nMarijn Janssen, Paul Brous, Elsa Estevez, Luis S. Barbosa, Tomasz Janowski, Data governance: Organizing data for trustworthy Artificial Intelligence, Government Information Quarterly, Volume 37, Issue 3, 2020, 101493, ISSN 0740-624X. [URL](https:\/\/doi.org\/10.1016\/j.giq.2020.101493)\n\n### Software Resources\n\n- [PiML](https:\/\/github.com\/SelfExplainML\/PiML-Toolbox) (explainable models, performance assessment)\n- [Interpret](https:\/\/github.com\/interpretml\/interpret) (explainable models)\n- [Iml](https:\/\/cran.r-project.org\/web\/packages\/iml\/index.html) (explainable models)\n- [Drifter](https:\/\/github.com\/ModelOriented\/drifter) library (performance assessment)\n- [Manifold](https:\/\/github.com\/uber\/manifold) library (performance assessment)\n- [SALib](https:\/\/github.com\/SALib\/SALib) library (performance assessment)\n- [What-If Tool](https:\/\/pair-code.github.io\/what-if-tool\/index.html#about) (performance assessment)\n- [MLextend](http:\/\/rasbt.github.io\/mlxtend\/) (performance assessment)\n- AI Fairness 360: \n - [Python](https:\/\/github.com\/Trusted-AI\/AIF360) (bias testing and mitigation)\n - [R](https:\/\/github.com\/Trusted-AI\/AIF360\/tree\/master\/aif360\/aif360-r) (bias testing and mitigation)\n- [Adversarial-robustness-toolbox](https:\/\/github.com\/Trusted-AI\/adversarial-robustness-toolbox) (ML security)\n- [Robustness](https:\/\/github.com\/MadryLab\/robustness) (ML security)\n- [tensorflow\/privacy](https:\/\/github.com\/tensorflow\/privacy) (ML security)\n- [NIST De-identification Tools](https:\/\/www.nist.gov\/itl\/applied-cybersecurity\/privacy-engineering\/collaboration-space\/focus-areas\/de-id\/tools) (Privacy and ML security)\n- [Dvc](https:\/\/dvc.org\/) (MLops, deployment)\n- [Gigantum](https:\/\/github.com\/gigantum) (MLops, deployment)\n- [Mlflow](https:\/\/mlflow.org\/) (MLops, deployment)\n- [Mlmd](https:\/\/github.com\/google\/ml-metadata) (MLops, deployment)\n- [Modeldb](https:\/\/github.com\/VertaAI\/modeldb) (MLops, deployment)",
"AI Actors":[
"AI Deployment",
"Operation and Monitoring",
"AI Impact Assessment",
"Governance and Oversight"
],
"Topic":[
"AI Deployment",
"Drift",
"Societal Values"
]
},
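The drift-monitoring controls described in MANAGE 2.2 above can be made concrete with a small statistical check. The following Python sketch is illustrative only and is not part of the NIST playbook; the function name, threshold, and synthetic data are assumptions. It compares a reference (training-time) feature sample against a production sample with SciPy's two-sample Kolmogorov-Smirnov test and flags suspected drift for review under the organization's established response options (avoid, accept, mitigate, or transfer).

```python
# Illustrative drift check (assumed names and thresholds; not part of the playbook).
import numpy as np
from scipy.stats import ks_2samp

def check_feature_drift(reference, production, alpha=0.01):
    """Compare one numeric feature's reference and production samples."""
    statistic, p_value = ks_2samp(reference, production)
    return {
        "ks_statistic": float(statistic),
        "p_value": float(p_value),
        # A small p-value suggests the distributions differ; escalate per protocol.
        "drift_suspected": bool(p_value < alpha),
    }

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    reference = rng.normal(0.0, 1.0, size=5_000)   # training-time distribution
    production = rng.normal(0.4, 1.0, size=5_000)  # shifted post-deployment sample
    print(check_feature_drift(reference, production))
```

In practice such a check would run per feature and per output metric on a regular cadence, with results fed into the documented risk acceptance and response procedures listed above.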
{
"type":"Manage",
"title":"MANAGE 2.3",
"category":"MANAGE-2",
"description":"Procedures are followed to respond to and recover from a previously unknown risk when it is identified.",
"section_about":"AI systems \u2013 like any technology \u2013 can demonstrate non-functionality or failure or unexpected and unusual behavior. They also can be subject to attacks, incidents, or other misuse or abuse \u2013 which their sources are not always known apriori. Organizations can establish, document, communicate and maintain treatment procedures to recognize and counter, mitigate and manage risks that were not previously identified.",
"section_actions":"- Protocols, resources, and metrics are in place for continual monitoring of AI systems\u2019 performance, trustworthiness, and alignment with contextual norms and values \n- Establish and regularly review treatment and response plans for incidents, negative impacts, or outcomes.\n- Establish and maintain procedures to regularly monitor system components for drift, decontextualization, or other AI system behavior factors, \n- Establish and maintain procedures for capturing feedback about negative impacts.\n- Verify contingency processes to handle any negative impacts associated with mission-critical AI systems, and to deactivate systems.\n- Enable preventive and post-hoc exploration of AI system limitations by relevant AI actor groups.\n- Decommission systems that exceed risk tolerances.",
"section_doc":"### Organizations can document the following\n\n- Who will be responsible for maintaining, re-verifying, monitoring, and updating this AI once deployed?\n- Are the responsibilities of the personnel involved in the various AI governance processes clearly defined? (Including responsibilities to decommission the AI system.)\n- What processes exist for data generation, acquisition\/collection, ingestion, staging\/storage, transformations, security, maintenance, and dissemination?\n- How will the appropriate performance metrics, such as accuracy, of the AI be monitored after the AI is deployed? \n\n### AI Transparency Resources\n\n- Artificial Intelligence Ethics Framework For The Intelligence Community. [URL](https:\/\/www.intelligence.gov\/artificial-intelligence-ethics-framework-for-the-intelligence-community) \n- WEF - Companion to the Model AI Governance Framework \u2013 Implementation and Self-Assessment Guide for Organizations. [URL](https:\/\/www.pdpc.gov.sg\/-\/media\/files\/pdpc\/pdf-files\/resource-for-organisation\/ai\/sgisago.ashx)\n- GAO-21-519SP - Artificial Intelligence: An Accountability Framework for Federal Agencies & Other Entities. [URL](https:\/\/www.gao.gov\/products\/gao-21-519sp)",
"section_ref":"AI Incident Database. 2022. AI Incident Database. [URL](https:\/\/incidentdatabase.ai\/)\n\nAIAAIC Repository. 2022. AI, algorithmic and automation incidents collected, dissected, examined, and divulged. [URL](https:\/\/www.aiaaic.org\/aiaaic-repository)\n\nAndrew Burt and Patrick Hall. 2018. What to Do When AI Fails. O\u2019Reilly Media, Inc. (May 18, 2020). Retrieved October 17, 2022. [URL](https:\/\/www.oreilly.com\/radar\/what-to-do-when-ai-fails\/)\n\nNational Institute for Standards and Technology (NIST). 2022. Cybersecurity Framework. [URL](https:\/\/www.nist.gov\/cyberframework)\n\nSANS Institute. 2022. Security Consensus Operational Readiness Evaluation (SCORE) Security Checklist [or Advanced Persistent Threat (APT) Handling Checklist]. [URL](https:\/\/www.sans.org\/media\/score\/checklists\/APT-IncidentHandling-Checklist.pdf)\n\nSuchi Saria, Adarsh Subbaswamy. 2019. Tutorial: Safe and Reliable Machine Learning. arXiv:1904.07204. [URL](https:\/\/doi.org\/10.48550\/arXiv.1904.07204)",
"AI Actors":[
"AI Deployment",
"Operation and Monitoring"
],
"Topic":[
"Risk Response"
]
},
{
"type":"Manage",
"title":"MANAGE 2.4",
"category":"MANAGE-2",
"description":"Mechanisms are in place and applied, responsibilities are assigned and understood to supersede, disengage, or deactivate AI systems that demonstrate performance or outcomes inconsistent with intended use.",
"section_about":"Performance inconsistent with intended use does not always increase risk or lead to negative impacts. Rigorous TEVV practices are useful for protecting against negative impacts regardless of intended use. When negative impacts do arise, superseding (bypassing), disengaging, or deactivating\/decommissioning a model, AI system component(s), or the entire AI system may be necessary, such as when: \n\n- a system reaches the end of its lifetime\n- detected or identified risks exceed tolerance thresholds\n- adequate system mitigation actions are beyond the organization\u2019s capacity\n- feasible system mitigation actions do not meet regulatory, legal, norms or standards. \n- impending risk is detected during continual monitoring, for which feasible mitigation cannot be identified or implemented in a timely fashion. \n\nSafely removing AI systems from operation, either temporarily or permanently, under these scenarios requires standard protocols that minimize operational disruption and downstream negative impacts. Protocols can involve redundant or backup systems that are developed in alignment with established system governance policies (see GOVERN 1.7), regulatory compliance, legal frameworks, business requirements and norms and l standards within the application context of use. Decision thresholds and metrics for actions to bypass or deactivate system components are part of continual monitoring procedures. Incidents that result in a bypass\/deactivate decision require documentation and review to understand root causes, impacts, and potential opportunities for mitigation and redeployment. Organizations are encouraged to develop risk and change management protocols that consider and anticipate upstream and downstream consequences of both temporary and\/or permanent decommissioning, and provide contingency options.",
"section_actions":"- Regularly review established procedures for AI system bypass actions, including plans for redundant or backup systems to ensure continuity of operational and\/or business functionality.\n- Regularly review Identify system incident thresholds for activating bypass or deactivation responses.\n- Apply change management processes to understand the upstream and downstream consequences of bypassing or deactivating an AI system or AI system components.\n- Apply protocols, resources and metrics for decisions to supersede, bypass or deactivate AI systems or AI system components.\n- Preserve materials for forensic, regulatory, and legal review.\n- Conduct internal root cause analysis and process reviews of bypass or deactivation events. \n- Decommission and preserve system components that cannot be updated to meet criteria for redeployment.\n- Establish criteria for redeploying updated system components, in consideration of trustworthy characteristics",
"section_doc":"### Organizations can document the following\n\n- What are the roles, responsibilities, and delegation of authorities of personnel involved in the design, development, deployment, assessment and monitoring of the AI system?\n- Did your organization implement a risk management system to address risks involved in deploying the identified AI solution (e.g. personnel risk or changes to commercial objectives)?\n- What testing, if any, has the entity conducted on the AI system to identify errors and limitations (i.e. adversarial or stress testing)?\n- To what extent does the entity have established procedures for retiring the AI system, if it is no longer needed?\n- How did the entity use assessments and\/or evaluations to determine if the system can be scaled up, continue, or be decommissioned?\n\n### AI Transparency Resources\n\n- GAO-21-519SP - Artificial Intelligence: An Accountability Framework for Federal Agencies & Other Entities. [URL](https:\/\/www.gao.gov\/products\/gao-21-519sp)",
"section_ref":"Decommissioning Template. Application Lifecycle And Supporting Docs. Cloud and Infrastructure Community of Practice. [URL](https:\/\/www.cio.gov\/policies-and-priorities\/application-lifecycle\/)\n\nDevelop a Decommission Plan. M3 Playbook. Office of Shared Services and Solutions and Performance Improvement. General Services Administration. [URL](https:\/\/ussm.gsa.gov\/2.8\/)",
"AI Actors":[
"AI Deployment",
"Operation and Monitoring",
"Governance and Oversight"
],
"Topic":[
"Risk Response",
"Decommission",
"Risky Emergent Behavior"
]
},
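MANAGE 2.4 above calls for decision thresholds and redundant or backup systems when superseding or deactivating an AI component. The sketch below is a minimal, assumption-laden illustration (the class and field names, rolling window, and 15% threshold are invented for the example, not prescribed by the playbook): a wrapper routes predictions to a simpler fallback rule once a monitored error rate crosses a pre-agreed threshold, and surfaces an alert for root cause review.

```python
# Illustrative bypass/fallback wrapper (names and threshold are assumptions).
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class GuardedModel:
    model: Callable[[Dict], float]      # primary AI system component
    fallback: Callable[[Dict], float]   # redundant or backup decision rule
    error_threshold: float = 0.15       # assumed tolerance from governance policy
    window: int = 500                   # rolling window of recent outcomes
    recent_errors: List[int] = field(default_factory=list)
    bypassed: bool = False

    def record_outcome(self, was_error: bool) -> None:
        """Track observed errors and trigger the bypass when tolerance is exceeded."""
        self.recent_errors = (self.recent_errors + [int(was_error)])[-self.window:]
        rate = sum(self.recent_errors) / len(self.recent_errors)
        if rate > self.error_threshold and not self.bypassed:
            self.bypassed = True
            print(f"ALERT: error rate {rate:.1%} exceeds threshold; routing to fallback")

    def predict(self, features: Dict) -> float:
        return self.fallback(features) if self.bypassed else self.model(features)

# Example: a trivial primary model and a conservative fallback rule.
guarded = GuardedModel(model=lambda x: 0.9, fallback=lambda x: 0.5)
for outcome in [True] * 100:
    guarded.record_outcome(outcome)
print(guarded.predict({"feature": 1.0}))   # served by the fallback after the alert
```

A real deployment would also preserve the triggering records for forensic review and apply the change management and redeployment criteria listed in the actions above.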
{
"type":"Manage",
"title":"MANAGE 3.1",
"category":"MANAGE-3",
"description":"AI risks and benefits from third-party resources are regularly monitored, and risk controls are applied and documented.",
"section_about":"AI systems may depend on external resources and associated processes, including third-party data, software or hardware systems. Third parties\u2019 supplying organizations with components and services, including tools, software, and expertise for AI system design, development, deployment or use can improve efficiency and scalability. It can also increase complexity and opacity, and, in-turn, risk. Documenting third-party technologies, personnel, and resources that were employed can help manage risks. Focusing first and foremost on risks involving physical safety, legal liabilities, regulatory compliance, and negative impacts on individuals, groups, or society is recommended.",
"section_actions":"- Have legal requirements been addressed?\n- Apply organizational risk tolerance to third-party AI systems.\n- Apply and document organizational risk management plans and practices to third-party AI technology, personnel, or other resources.\n- Identify and maintain documentation for third-party AI systems and components.\n- Establish testing, evaluation, validation and verification processes for third-party AI systems which address the needs for transparency without exposing proprietary algorithms .\n- Establish processes to identify beneficial use and risk indicators in third-party systems or components, such as inconsistent software release schedule, sparse documentation, and incomplete software change management (e.g., lack of forward or backward compatibility).\n- Organizations can establish processes for third parties to report known and potential vulnerabilities, risks or biases in supplied resources.\n- Verify contingency processes for handling negative impacts associated with mission-critical third-party AI systems.\n- Monitor third-party AI systems for potential negative impacts and risks associated with trustworthiness characteristics.\n- Decommission third-party systems that exceed risk tolerances.",
"section_doc":"### Organizations can document the following\n\n- If a third party created the AI system or some of its components, how will you ensure a level of explainability or interpretability? Is there documentation?\n- If your organization obtained datasets from a third party, did your organization assess and manage the risks of using such datasets?\n- Did you establish a process for third parties (e.g. suppliers, end users, subjects, distributors\/vendors or workers) to report potential vulnerabilities, risks or biases in the AI system?\n- Have legal requirements been addressed?\n\n### AI Transparency Resources\n\n- Artificial Intelligence Ethics Framework For The Intelligence Community. [URL](https:\/\/www.intelligence.gov\/artificial-intelligence-ethics-framework-for-the-intelligence-community)\n- WEF - Companion to the Model AI Governance Framework \u2013 Implementation and Self-Assessment Guide for Organizations. [URL](https:\/\/www.pdpc.gov.sg\/-\/media\/files\/pdpc\/pdf-files\/resource-for-organisation\/ai\/sgisago.ashx)\n- Datasheets for Datasets. [URL](https:\/\/arxiv.org\/abs\/1803.09010)",
"section_ref":"Office of the Comptroller of the Currency. 2021. Proposed Interagency Guidance on Third-Party Relationships: Risk Management. July 12, 2021. [URL](https:\/\/www.occ.gov\/news-issuances\/news-releases\/2021\/nr-occ-2021-74a.pdf)",
"AI Actors":[
"Third-party entities",
"Operation and Monitoring",
"AI Deployment"
],
"Topic":[
"Third-party",
"Supply Chain"
]
},
{
"type":"Manage",
"title":"MANAGE 3.2",
"category":"MANAGE-3",
"description":"Pre-trained models which are used for development are monitored as part of AI system regular monitoring and maintenance.",
"section_about":"A common approach in AI development is transfer learning, whereby an existing pre-trained model is adapted for use in a different, but related application. AI actors in development tasks often use pre-trained models from third-party entities for tasks such as image classification, language prediction, and entity recognition, because the resources to build such models may not be readily available to most organizations. Pre-trained models are typically trained to address various classification or prediction problems, using exceedingly large datasets and computationally intensive resources. The use of pre-trained models can make it difficult to anticipate negative system outcomes or impacts. Lack of documentation or transparency tools increases the difficulty and general complexity when deploying pre-trained models and hinders root cause analyses.",
"section_actions":"- Identify pre-trained models within AI system inventory for risk tracking.\n- Establish processes to independently and continually monitor performance and trustworthiness of pre-trained models, and as part of third-party risk tracking. \n- Monitor performance and trustworthiness of AI system components connected to pre-trained models, and as part of third-party risk tracking.\n- Identify, document and remediate risks arising from AI system components and pre-trained models per organizational risk management procedures, and as part of third-party risk tracking.\n- Decommission AI system components and pre-trained models which exceed risk tolerances, and as part of third-party risk tracking.",
"section_doc":"### Organizations can document the following\n\n- How has the entity documented the AI system\u2019s data provenance, including sources, origins, transformations, augmentations, labels, dependencies, constraints, and metadata?\n- Does this dataset collection\/processing procedure achieve the motivation for creating the dataset stated in the first section of this datasheet?\n- How does the entity ensure that the data collected are adequate, relevant, and not excessive in relation to the intended purpose?\n- If the dataset becomes obsolete how will this be communicated?\n\n### AI Transparency Resources\n\n- Artificial Intelligence Ethics Framework For The Intelligence Community. [URL](https:\/\/www.intelligence.gov\/artificial-intelligence-ethics-framework-for-the-intelligence-community)\n- WEF - Companion to the Model AI Governance Framework \u2013 Implementation and Self-Assessment Guide for Organizations. [URL](https:\/\/www.pdpc.gov.sg\/-\/media\/files\/pdpc\/pdf-files\/resource-for-organisation\/ai\/sgisago.ashx)\n- Datasheets for Datasets. [URL](https:\/\/arxiv.org\/abs\/1803.09010)",
"section_ref":"Larysa Visengeriyeva et al. \u201cAwesome MLOps,\u201c GitHub. Accessed January 9, 2023. [URL](https:\/\/github.com\/visenger)",
"AI Actors":[
"Third-party entities",
"Operation and Monitoring",
"AI Deployment"
],
"Topic":[
"Pre-trained models",
"Monitoring"
]
},
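One way to support the pre-trained model inventory and third-party risk tracking described in MANAGE 3.2 above is to record the provenance of each model artifact at registration time. The following sketch is illustrative only; the file names, fields, and "risk_status" values are assumptions rather than a prescribed schema.

```python
# Illustrative provenance record for a third-party pre-trained model
# (file names, fields, and status values are assumptions).
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def register_pretrained_model(artifact_path, source_url,
                              inventory_file="model_inventory.jsonl"):
    """Hash the model artifact and append an inventory record for risk tracking."""
    digest = hashlib.sha256(Path(artifact_path).read_bytes()).hexdigest()
    record = {
        "artifact": str(artifact_path),
        "source_url": source_url,
        "sha256": digest,                    # detects silent upstream changes
        "registered_at": datetime.now(timezone.utc).isoformat(),
        "risk_status": "pending_review",     # updated by monitoring and TEVV processes
    }
    with open(inventory_file, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record
```

Hashing the artifact and pinning its source makes it easier to detect upstream changes, tie monitoring results to a specific model version, and decommission components that later exceed risk tolerances.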
{
"type":"Manage",
"title":"MANAGE 4.1",
"category":"MANAGE-4",
"description":"Post-deployment AI system monitoring plans are implemented, including mechanisms for capturing and evaluating input from users and other relevant AI actors, appeal and override, decommissioning, incident response, recovery, and change management.",
"section_about":"AI system performance and trustworthiness can change due to a variety of factors. Regular AI system monitoring can help deployers identify performance degradations, adversarial attacks, unexpected and unusual behavior, near-misses, and impacts. Including pre- and post-deployment external feedback about AI system performance can enhance organizational awareness about positive and negative impacts, and reduce the time to respond to risks and harms.",
"section_actions":"- Establish and maintain procedures to monitor AI system performance for risks and negative and positive impacts associated with trustworthiness characteristics. \n- Perform post-deployment TEVV tasks to evaluate AI system validity and reliability, bias and fairness, privacy, and security and resilience.\n- Evaluate AI system trustworthiness in conditions similar to deployment context of use, and prior to deployment.\n- Establish and implement red-teaming exercises at a prescribed cadence, and evaluate their efficacy. \n- Establish procedures for tracking dataset modifications such as data deletion or rectification requests.\n- Establish mechanisms for regular communication and feedback between relevant AI actors and internal or external stakeholders to capture information about system performance, trustworthiness and impact.\n- Share information about errors, near-misses, and attack patterns with incident databases, other organizations with similar systems, and system users and stakeholders.\n- Respond to and document detected or reported negative impacts or issues in AI system performance and trustworthiness.\n- Decommission systems that exceed establish risk tolerances.",
"section_doc":"### Organizations can document the following\n\n- To what extent has the entity documented the post-deployment AI system\u2019s testing methodology, metrics, and performance outcomes?\n- How easily accessible and current is the information available to external stakeholders?\n\n### AI Transparency Resources\n\n- GAO-21-519SP - Artificial Intelligence: An Accountability Framework for Federal Agencies & Other Entities, [URL](https:\/\/www.gao.gov\/products\/gao-21-519sp)\n- Datasheets for Datasets. [URL](https:\/\/arxiv.org\/abs\/1803.09010)",
"section_ref":"Navdeep Gill, Patrick Hall, Kim Montgomery, and Nicholas Schmidt. \"A Responsible Machine Learning Workflow with Focus on Interpretable Models, Post-hoc Explanation, and Discrimination Testing.\" Information 11, no. 3 (2020): 137. [URL](https:\/\/www.mdpi.com\/2078-2489\/11\/3\/137)",
"AI Actors":[
"AI Deployment",
"Operation and Monitoring",
"End-Users",
"Human Factors",
"Domain Experts",
"Affected Individuals and Communities"
],
"Topic":[
"Monitoring",
"Participation",
"AI Deployment",
"AI Incidents",
"Risk Response",
"Adversarial",
"Risky Emergent Behavior"
]
},
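The post-deployment monitoring actions in MANAGE 4.1 above can be partially automated with lightweight runtime checks, in the spirit of the model-assertions work cited in the MANAGE 2.2 references. The sketch below is an assumption-laden illustration (the check names, score range, and minor-content rule are invented for the example): each prediction is tested against simple contextual rules, and violations are logged as candidate incidents for human review.

```python
# Illustrative runtime monitoring assertions (check names and rules are invented).
import logging
from typing import Dict, List

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ai_monitoring")

def assert_prediction(record: Dict, prediction: float, incident_log: List[Dict]) -> None:
    """Flag predictions that violate simple contextual expectations."""
    checks = {
        "score_in_range": 0.0 <= prediction <= 1.0,
        # Example business rule restricting certain recommendations for minors.
        "minor_content_restricted": not (record.get("user_is_minor") and prediction > 0.8),
    }
    failed = [name for name, passed in checks.items() if not passed]
    if failed:
        incident = {"record_id": record.get("id"), "failed_checks": failed}
        incident_log.append(incident)
        logger.warning("Monitoring assertion failed: %s", incident)

incidents: List[Dict] = []
assert_prediction({"id": 42, "user_is_minor": True}, prediction=0.93, incident_log=incidents)
print(incidents)
```

Logged violations would feed the feedback, incident-sharing, and decommissioning actions listed above rather than being handled silently in code.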
{
"type":"Manage",
"title":"MANAGE 4.2",
"category":"MANAGE-4",
"description":"Measurable activities for continual improvements are integrated into AI system updates and include regular engagement with interested parties, including relevant AI actors.",
"section_about":"Regular monitoring processes enable system updates to enhance performance and functionality in accordance with regulatory and legal frameworks, and organizational and contextual values and norms. These processes also facilitate analyses of root causes, system degradation, drift, near-misses, and failures, and incident response and documentation. \n\nAI actors across the lifecycle have many opportunities to capture and incorporate external feedback about system performance, limitations, and impacts, and implement continuous improvements. Improvements may not always be to model pipeline or system processes, and may instead be based on metrics beyond accuracy or other quality performance measures. In these cases, improvements may entail adaptations to business or organizational procedures or practices. Organizations are encouraged to develop improvements that will maintain traceability and transparency for developers, end users, auditors, and relevant AI actors.",
"section_actions":"- Integrate trustworthiness characteristics into protocols and metrics used for continual improvement.\n- Establish processes for evaluating and integrating feedback into AI system improvements.\n- Assess and evaluate alignment of proposed improvements with relevant regulatory and legal frameworks\n- Assess and evaluate alignment of proposed improvements connected to the values and norms within the context of use.\n- Document the basis for decisions made relative to tradeoffs between trustworthy characteristics, system risks, and system opportunities",
"section_doc":"### Organizations can document the following\n\n- How will user and other forms of stakeholder engagement be integrated into the model development process and regular performance review once deployed?\n- To what extent can users or parties affected by the outputs of the AI system test the AI system and provide feedback?\n- To what extent has the entity defined and documented the regulatory environment\u2014including minimum requirements in laws and regulations?\n\n### AI Transparency Resources\n\n- GAO-21-519SP - Artificial Intelligence: An Accountability Framework for Federal Agencies & Other Entities, [URL](https:\/\/www.gao.gov\/products\/gao-21-519sp)\n- Artificial Intelligence Ethics Framework For The Intelligence Community. [URL](https:\/\/www.intelligence.gov\/artificial-intelligence-ethics-framework-for-the-intelligence-community)",
"section_ref":"Yen, Po-Yin, et al. \"Development and Evaluation of Socio-Technical Metrics to Inform HIT Adaptation.\" [URL](https:\/\/digital.ahrq.gov\/sites\/default\/files\/docs\/citation\/r21hs024767-yen-final-report-2019.pdf)\n\nCarayon, Pascale, and Megan E. Salwei. \"Moving toward a sociotechnical systems approach to continuous health information technology design: the path forward for improving electronic health record usability and reducing clinician burnout.\" Journal of the American Medical Informatics Association 28.5 (2021): 1026-1028. [URL](https:\/\/www.ncbi.nlm.nih.gov\/pmc\/articles\/PMC8068435\/pdf\/ocab002.pdf)\n\nMishra, Deepa, et al. \"Organizational capabilities that enable big data and predictive analytics diffusion and organizational performance: A resource-based perspective.\" Management Decision (2018).",
"AI Actors":[
"TEVV",
"AI Design",
"AI Development",
"AI Deployment",
"Operation and Monitoring",
"End-Users",
"Affected Individuals and Communities"
],
"Topic":[
"Monitoring",
"Impact Assessment",
"Risk Assessment"
]
},
{
"type":"Manage",
"title":"MANAGE 4.3",
"category":"MANAGE-4",
"description":"Incidents and errors are communicated to relevant AI actors including affected communities. Processes for tracking, responding to, and recovering from incidents and errors are followed and documented.",
"section_about":"Regularly documenting an accurate and transparent account of identified and reported errors can enhance AI risk management activities., Examples include:\n\n- how errors were identified, \n- incidents related to the error, \n- whether the error has been repaired, and\n- how repairs can be distributed to all impacted stakeholders and users.",
"section_actions":"- Establish procedures to regularly share information about errors, incidents and negative impacts with relevant stakeholders, operators, practitioners and users, and impacted parties.\n- Maintain a database of reported errors, near-misses, incidents and negative impacts including date reported, number of reports, assessment of impact and severity, and responses.\n- Maintain a database of system changes, reason for change, and details of how the change was made, tested and deployed. \n- Maintain version history information and metadata to enable continuous improvement processes.\n- Verify that relevant AI actors responsible for identifying complex or emergent risks are properly resourced and empowered.",
"section_doc":"### Organizations can document the following\n\n- What corrective actions has the entity taken to enhance the quality, accuracy, reliability, and representativeness of the data?\n- To what extent does the entity communicate its AI strategic goals and objectives to the community of stakeholders? How easily accessible and current is the information available to external stakeholders?\n- What type of information is accessible on the design, operations, and limitations of the AI system to external stakeholders, including end users, consumers, regulators, and individuals impacted by use of the AI system?\n\n### AI Transparency Resources\n\n- GAO-21-519SP: Artificial Intelligence: An Accountability Framework for Federal Agencies & Other Entities, [URL](https:\/\/www.gao.gov\/products\/gao-21-519sp)",
"section_ref":"Wei, M., & Zhou, Z. (2022). AI Ethics Issues in Real World: Evidence from AI Incident Database. ArXiv, abs\/2206.07635. [URL](https:\/\/arxiv.org\/pdf\/2206.07635.pdf)\n\nMcGregor, Sean. \"Preventing repeated real world AI failures by cataloging incidents: The AI incident database.\" Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 35. No. 17. 2021. [URL](https:\/\/arxiv.org\/pdf\/2011.08512.pdf)\n\nMacrae, Carl. \"Learning from the failure of autonomous and intelligent systems: Accidents, safety, and sociotechnical sources of risk.\" Risk analysis 42.9 (2022): 1999-2025. [URL](https:\/\/onlinelibrary.wiley.com\/doi\/epdf\/10.1111\/risa.13850)",
"AI Actors":[
"AI Deployment",
"Operation and Monitoring",
"End-Users",
"Human Factors",
"Domain Experts",
"Affected Individuals and Communities"
],
"Topic":[
"AI Incidents",
"Monitoring"
]
},
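MANAGE 4.3 above calls for maintaining a database of reported errors, near-misses, and incidents. The SQLite sketch below is one possible minimal starting point, not a prescribed format; the table name, columns, and severity levels are assumptions. It captures the elements listed in the actions: date reported, report counts, an assessment of severity, and the response taken.

```python
# Illustrative incident log (schema and field names are assumptions).
import sqlite3
from datetime import date

conn = sqlite3.connect("ai_incidents.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS incidents (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        date_reported TEXT NOT NULL,
        description TEXT NOT NULL,
        report_count INTEGER DEFAULT 1,
        severity TEXT CHECK (severity IN ('low', 'medium', 'high')),
        response TEXT,
        resolved INTEGER DEFAULT 0
    )
""")
conn.execute(
    "INSERT INTO incidents (date_reported, description, severity, response) VALUES (?, ?, ?, ?)",
    (date.today().isoformat(),
     "Elevated false-positive rate reported by end users",
     "medium",
     "Rolled back to previous model version pending root cause review"),
)
conn.commit()
conn.close()
```

Whatever the storage choice, the record should support communicating incidents to relevant AI actors and affected communities, as the subcategory requires.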
{
"type":"Map",
"title":"MAP 1.1",
"category":"MAP-1",
"description":"Intended purpose, potentially beneficial uses, context-specific laws, norms and expectations, and prospective settings in which the AI system will be deployed are understood and documented. Considerations include: specific set or types of users along with their expectations; potential positive and negative impacts of system uses to individuals, communities, organizations, society, and the planet; assumptions and related limitations about AI system purposes; uses and risks across the development or product AI lifecycle; TEVV and system metrics.",
"section_about":"Highly accurate and optimized systems can cause harm. Relatedly, organizations should expect broadly deployed AI tools to be reused, repurposed, and potentially misused regardless of intentions. \n\nAI actors can work collaboratively, and with external parties such as community groups, to help delineate the bounds of acceptable deployment, consider preferable alternatives, and identify principles and strategies to manage likely risks. Context mapping is the first step in this effort, and may include examination of the following: \n\n* intended purpose and impact of system use. \n* concept of operations. \n* intended, prospective, and actual deployment setting. \n* requirements for system deployment and operation. \n* end user and operator expectations. \n* specific set or types of end users. \n* potential negative impacts to individuals, groups, communities, organizations, and society \u2013 or context-specific impacts such as legal requirements or impacts to the environment. \n* unanticipated, downstream, or other unknown contextual factors.\n* how AI system changes connect to impacts. \n\nThese types of processes can assist AI actors in understanding how limitations, constraints, and other realities associated with the deployment and use of AI technology can create impacts once they are deployed or operate in the real world. When coupled with the enhanced organizational culture resulting from the established policies and procedures in the Govern function, the Map function can provide opportunities to foster and instill new perspectives, activities, and skills for approaching risks and impacts. \n\nContext mapping also includes discussion and consideration of non-AI or non-technology alternatives especially as related to whether the given context is narrow enough to manage AI and its potential negative impacts. Non-AI alternatives may include capturing and evaluating information using semi-autonomous or mostly-manual methods.",
"section_actions":"- Maintain awareness of industry, technical, and applicable legal standards.\n- Examine trustworthiness of AI system design and consider, non-AI solutions \n- Consider intended AI system design tasks along with unanticipated purposes in collaboration with human factors and socio-technical domain experts.\n- Define and document the task, purpose, minimum functionality, and benefits of the AI system to inform considerations about whether the utility of the project or its lack of.\n- Identify whether there are non-AI or non-technology alternatives that will lead to more trustworthy outcomes. \n- Examine how changes in system performance affect downstream events such as decision-making (e.g: changes in an AI model objective function create what types of impacts in how many candidates do\/do not get a job interview). \n- Determine the end user and organizational requirements, including business and technical requirements.\n- Determine and delineate the expected and acceptable AI system context of use, including:\n - social norms\n - Impacted individuals, groups, and communities\n - potential positive and negative impacts to individuals, groups, communities, organizations, and society\n - operational environment\n- Perform context analysis related to time frame, safety concerns, geographic area, physical environment, ecosystems, social environment, and cultural norms within the intended setting (or conditions that closely approximate the intended setting.\n- Gain and maintain awareness about evaluating scientific claims related to AI system performance and benefits before launching into system design.\n- Identify human-AI interaction and\/or roles, such as whether the application will support or replace human decision making.\n- Plan for risks related to human-AI configurations, and document requirements, roles, and responsibilities for human oversight of deployed systems.",
"section_doc":"### Organizations can document the following\n- To what extent is the output of each component appropriate for the operational context?\n- Which AI actors are responsible for the decisions of the AI and is this person aware of the intended uses and limitations of the analytic?\n- Which AI actors are responsible for maintaining, re-verifying, monitoring, and updating this AI once deployed?\n- Who is the person(s) accountable for the ethical considerations across the AI lifecycle?\n\n### AI Transparency Resources\n- GAO-21-519SP: AI Accountability Framework for Federal Agencies & Other Entities, [URL](https:\/\/www.gao.gov\/products\/gao-21-519sp)\n- \u201cStakeholders in Explainable AI,\u201d Sep. 2018. [URL](http:\/\/arxiv.org\/abs\/1810.00184)\n- \"Microsoft Responsible AI Standard, v2\". [URL](https:\/\/query.prod.cms.rt.microsoft.com\/cms\/api\/am\/binary\/RE4ZPmV)",
"section_ref":"### Socio-technical systems\n\nAndrew D. Selbst, danah boyd, Sorelle A. Friedler, et al. 2019. Fairness and Abstraction in Sociotechnical Systems. In Proceedings of the Conference on Fairness, Accountability, and Transparency (FAccT'19). Association for Computing Machinery, New York, NY, USA, 59\u201368. [URL](https:\/\/doi.org\/10.1145\/3287560.3287598)\n\n### Problem formulation\n\nRoel Dobbe, Thomas Krendl Gilbert, and Yonatan Mintz. 2021. Hard choices in artificial intelligence. Artificial Intelligence 300 (14 July 2021), 103555, ISSN 0004-3702. [URL](https:\/\/doi.org\/10.1016\/j.artint.2021.103555)\n\nSamir Passi and Solon Barocas. 2019. Problem Formulation and Fairness. In Proceedings of the Conference on Fairness, Accountability, and Transparency (FAccT'19). Association for Computing Machinery, New York, NY, USA, 39\u201348. [URL](https:\/\/doi.org\/10.1145\/3287560.3287567)\n\n### Context mapping\n\nEmilio G\u00f3mez-Gonz\u00e1lez and Emilia G\u00f3mez. 2020. Artificial intelligence in medicine and healthcare. Joint Research Centre (European Commission). [URL](https:\/\/op.europa.eu\/en\/publication-detail\/-\/publication\/b4b5db47-94c0-11ea-aac4-01aa75ed71a1\/language-en)\n\nSarah Spiekermann and Till Winkler. 2020. Value-based Engineering for Ethics by Design. arXiv:2004.13676. [URL](https:\/\/arxiv.org\/abs\/2004.13676)\n\nSocial Impact Lab. 2017. Framework for Context Analysis of Technologies in Social Change Projects (Draft v2.0). [URL](https:\/\/www.alnap.org\/system\/files\/content\/resource\/files\/main\/Draft%20SIMLab%20Context%20Analysis%20Framework%20v2.0.pdf)\n\nSolon Barocas, Asia J. Biega, Margarita Boyarskaya, et al. 2021. Responsible computing during COVID-19 and beyond. Commun. ACM 64, 7 (July 2021), 30\u201332. [URL](https:\/\/doi.org\/10.1145\/3466612)\n\n### Identification of harms\n\nHarini Suresh and John V. Guttag. 2020. A Framework for Understanding Sources of Harm throughout the Machine Learning Life Cycle. arXiv:1901.10002. [URL](https:\/\/arxiv.org\/abs\/1901.10002)\n\nMargarita Boyarskaya, Alexandra Olteanu, and Kate Crawford. 2020. Overcoming Failures of Imagination in AI Infused System Development and Deployment. arXiv:2011.13416. [URL](https:\/\/arxiv.org\/abs\/2011.13416)\n\nMicrosoft. Foundations of assessing harm. 2022. [URL](https:\/\/docs.microsoft.com\/en-us\/azure\/architecture\/guide\/responsible-innovation\/harms-modeling\/)\n\n### Understanding and documenting limitations in ML\n\nAlexander D'Amour, Katherine Heller, Dan Moldovan, et al. 2020. Underspecification Presents Challenges for Credibility in Modern Machine Learning. arXiv:2011.03395. [URL](https:\/\/arxiv.org\/abs\/2011.03395)\n\nArvind Narayanan. \"How to Recognize AI Snake Oil.\" Arthur Miller Lecture on Science and Ethics (2019). [URL](https:\/\/www.cs.princeton.edu\/~arvindn\/talks\/MIT-STS-AI-snakeoil.pdf)\n\nJessie J. Smith, Saleema Amershi, Solon Barocas, et al. 2022. REAL ML: Recognizing, Exploring, and Articulating Limitations of Machine Learning Research. arXiv:2205.08363. [URL](https:\/\/arxiv.org\/abs\/2205.08363)\n\nMargaret Mitchell, Simone Wu, Andrew Zaldivar, et al. 2019. Model Cards for Model Reporting. In Proceedings of the Conference on Fairness, Accountability, and Transparency (FAT* '19). Association for Computing Machinery, New York, NY, USA, 220\u2013229. [URL](https:\/\/doi.org\/10.1145\/3287560.3287596)\n\nMatthew Arnold, Rachel K. E. Bellamy, Michael Hind, et al. 2019. 
FactSheets: Increasing Trust in AI Services through Supplier's Declarations of Conformity. arXiv:1808.07261. [URL](https:\/\/arxiv.org\/abs\/1808.07261)\n\nMatthew J. Salganik, Ian Lundberg, Alexander T. Kindel, Caitlin E. Ahearn, Khaled Al-Ghoneim, Abdullah Almaatouq, Drew M. Altschul et al. \"Measuring the Predictability of Life Outcomes with a Scientific Mass Collaboration.\" Proceedings of the National Academy of Sciences 117, No. 15 (2020): 8398-8403. [URL](https:\/\/www.pnas.org\/doi\/10.1073\/pnas.1915006117)\n\nMichael A. Madaio, Luke Stark, Jennifer Wortman Vaughan, and Hanna Wallach. 2020. Co-Designing Checklists to Understand Organizational Challenges and Opportunities around Fairness in AI. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (CHI \u201820). Association for Computing Machinery, New York, NY, USA, 1\u201314. [URL](https:\/\/doi.org\/10.1145\/3313831.3376445)\n\nTimnit Gebru, Jamie Morgenstern, Briana Vecchione, et al. 2021. Datasheets for Datasets. arXiv:1803.09010. [URL](https:\/\/arxiv.org\/abs\/1803.09010)\n\nBender, E. M., Friedman, B. & McMillan-Major, A., (2022). A Guide for Writing Data Statements for Natural Language Processing. University of Washington. Accessed July 14, 2022. [URL](https:\/\/techpolicylab.uw.edu\/wp-content\/uploads\/2021\/11\/Data_Statements_Guide_V2.pdf)\n\nMeta AI. System Cards, a new resource for understanding how AI systems work, 2021. [URL](https:\/\/ai.facebook.com\/blog\/system-cards-a-new-resource-for-understanding-how-ai-systems-work\/)\n\n### When not to deploy\n\nSolon Barocas, Asia J. Biega, Benjamin Fish, et al. 2020. When not to design, build, or deploy. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (FAT* '20). Association for Computing Machinery, New York, NY, USA, 695. [URL](https:\/\/doi.org\/10.1145\/3351095.3375691)\n\n### Statistical balance\n\nZiad Obermeyer, Brian Powers, Christine Vogeli, and Sendhil Mullainathan. 2019. Dissecting racial bias in an algorithm used to manage the health of populations. Science 366, 6464 (25 Oct. 2019), 447-453. [URL](https:\/\/doi.org\/10.1126\/science.aax2342)\n\n### Assessment of science in AI\n\nArvind Narayanan. How to recognize AI snake oil. [URL](https:\/\/www.cs.princeton.edu\/~arvindn\/talks\/MIT-STS-AI-snakeoil.pdf)\n\nEmily M. Bender. 2022. On NYT Magazine on AI: Resist the Urge to be Impressed. (April 17, 2022). [URL](https:\/\/medium.com\/@emilymenonbender\/on-nyt-magazine-on-ai-resist-the-urge-to-be-impressed-3d92fd9a0edd)",
"AI Actors":[
],
"Topic":[
"Socio-technical systems",
"Societal Values",
"Context of Use",
"Impact Assessment",
"TEVV",
"Trustworthy Characteristics",
"Validity and Reliability",
"Safety",
"Secure and Resilient",
"Accountability and Transparency",
"Explainability and Interpretability",
"Privacy",
"Fairness and Bias"
]
},
{
"type":"Map",
"title":"MAP 1.2",
"category":"MAP-1",
"description":"Inter-disciplinary AI actors, competencies, skills and capacities for establishing context reflect demographic diversity and broad domain and user experience expertise, and their participation is documented. Opportunities for interdisciplinary collaboration are prioritized.",
"section_about":"Successfully mapping context requires a team of AI actors with a diversity of experience, expertise, abilities and backgrounds, and with the resources and independence to engage in critical inquiry.\n\nHaving a diverse team contributes to more broad and open sharing of ideas and assumptions about the purpose and function of the technology being designed and developed \u2013 making these implicit aspects more explicit. The benefit of a diverse staff in managing AI risks is not the beliefs or presumed beliefs of individual workers, but the behavior that results from a collective perspective. An environment which fosters critical inquiry creates opportunities to surface problems and identify existing and emergent risks.",
"section_actions":"- Establish interdisciplinary teams to reflect a wide range of skills, competencies, and capabilities for AI efforts. Verify that team membership includes demographic diversity, broad domain expertise, and lived experiences. Document team composition.\n- Create and empower interdisciplinary expert teams to capture, learn, and engage the interdependencies of deployed AI systems and related terminologies and concepts from disciplines outside of AI practice such as law, sociology, psychology, anthropology, public policy, systems design, and engineering.",
"section_doc":"### Organizations can document the following\n- To what extent do the teams responsible for developing and maintaining the AI system reflect diverse opinions, backgrounds, experiences, and perspectives?\n- Did the entity document the demographics of those involved in the design and development of the AI system to capture and communicate potential biases inherent to the development process, according to forum participants?\n- What specific perspectives did stakeholders share, and how were they integrated across the design, development, deployment, assessment, and monitoring of the AI system?\n- To what extent has the entity addressed stakeholder perspectives on the potential negative impacts of the AI system on end users and impacted populations?\n- What type of information is accessible on the design, operations, and limitations of the AI system to external stakeholders, including end users, consumers, regulators, and individuals impacted by use of the AI system?\n- Did your organization address usability problems and test whether user interfaces served their intended purposes? Consulting the community or end users at the earliest stages of development to ensure there is transparency on the technology used and how it is deployed.\n\n### AI Transparency Resources\n- GAO-21-519SP: AI Accountability Framework for Federal Agencies & Other Entities. [URL](https:\/\/www.gao.gov\/products\/gao-21-519sp)\n- WEF Model AI Governance Framework Assessment 2020. [URL](https:\/\/www.pdpc.gov.sg\/-\/media\/Files\/PDPC\/PDF-Files\/Resource-for-Organisation\/AI\/SGModelAIGovFramework2.pdf)\n- WEF Companion to the Model AI Governance Framework- 2020. [URL](https:\/\/www.pdpc.gov.sg\/-\/media\/Files\/PDPC\/PDF-Files\/Resource-for-Organisation\/AI\/SGIsago.pdf)\n- AI policies and initiatives, in Artificial Intelligence in Society, OECD, 2019. [URL](https:\/\/www.oecd.org\/publications\/artificial-intelligence-in-society-eedfee77-en.htm)",
"section_ref":"Sina Fazelpour and Maria De-Arteaga. 2022. Diversity in sociotechnical machine learning systems. Big Data & Society 9, 1 (Jan. 2022). [URL](https:\/\/doi.org\/10.1177%2F20539517221082027)\n\nMicrosoft Community Jury , Azure Application Architecture Guide. [URL](https:\/\/docs.microsoft.com\/en-us\/azure\/architecture\/guide\/responsible-innovation\/community-jury\/)\n\nFernando Delgado, Stephen Yang, Michael Madaio, Qian Yang. (2021). Stakeholder Participation in AI: Beyond \"Add Diverse Stakeholders and Stir\". [URL](https:\/\/deepai.org\/publication\/stakeholder-participation-in-ai-beyond-add-diverse-stakeholders-and-stir)\n\nKush Varshney, Tina Park, Inioluwa Deborah Raji, Gaurush Hiranandani, Narasimhan Harikrishna, Oluwasanmi Koyejo, Brianna Richardson, and Min Kyung Lee. Participatory specification of trustworthy machine learning, 2021.\n\nDonald Martin, Vinodkumar Prabhakaran, Jill A. Kuhlberg, Andrew Smart and William S. Isaac. \u201cParticipatory Problem Formulation for Fairer Machine Learning Through Community Based System Dynamics\u201d, ArXiv abs\/2005.07572 (2020). [URL](https:\/\/arxiv.org\/pdf\/2005.07572.pdf)",
"AI Actors":[
],
"Topic":[
"Diversity",
"Interdisciplinarity",
"Socio-technical systems"
]
},
{
"type":"Map",
"title":"MAP 1.3",
"category":"MAP-1",
"description":"The organization\u2019s mission and relevant goals for the AI technology are understood and documented.",
"section_about":"Defining and documenting the specific business purpose of an AI system in a broader context of societal values helps teams to evaluate risks and increases the clarity of \u201cgo\/no-go\u201d decisions about whether to deploy.\n\nTrustworthy AI technologies may present a demonstrable business benefit beyond implicit or explicit costs, provide added value, and don't lead to wasted resources. Organizations can feel confident in performing risk avoidance if the implicit or explicit risks outweigh the advantages of AI systems, and not implementing an AI solution whose risks surpass potential benefits.\n\nFor example, making AI systems more equitable can result in better managed risk, and can help enhance consideration of the business value of making inclusively designed, accessible and more equitable AI systems.",
"section_actions":"- Build transparent practices into AI system development processes.\n- Review the documented system purpose from a socio-technical perspective and in consideration of societal values.\n- Determine possible misalignment between societal values and stated organizational principles and code of ethics.\n- Flag latent incentives that may contribute to negative impacts.\n- Evaluate AI system purpose in consideration of potential risks, societal values, and stated organizational principles.",
"section_doc":"### Organizations can document the following\n- How does the AI system help the entity meet its goals and objectives?\n- How do the technical specifications and requirements align with the AI system\u2019s goals and objectives?\n- To what extent is the output appropriate for the operational context?\n\n### AI Transparency Resources\n- Assessment List for Trustworthy AI (ALTAI) - The High-Level Expert Group on AI \u2013 2019, [LINK](https:\/\/altai.insight-centre.org\/), [URL](https:\/\/digital-strategy.ec.europa.eu\/en\/library\/assessment-list-trustworthy-artificial-intelligence-altai-self-assessment).\n- Including Insights from the Comptroller General\u2019s Forum on the Oversight of Artificial Intelligence An Accountability Framework for Federal Agencies and Other Entities, 2021, [URL](https:\/\/www.gao.gov\/products\/gao-21-519sp), [PDF](https:\/\/www.gao.gov\/assets\/gao-21-519sp-highlights.pdf).",
"section_ref":"M.S. Ackerman (2000). The Intellectual Challenge of CSCW: The Gap Between Social Requirements and Technical Feasibility. Human\u2013Computer Interaction, 15, 179 - 203. [URL](https:\/\/socialworldsresearch.org\/sites\/default\/files\/hci.final_.pdf)\n\nMcKane Andrus, Sarah Dean, Thomas Gilbert, Nathan Lambert, Tom Zick (2021). AI Development for the Public Interest: From Abstraction Traps to Sociotechnical Risks. [URL](https:\/\/arxiv.org\/pdf\/2102.04255.pdf)\n\nAbeba Birhane, Pratyusha Kalluri, Dallas Card, et al. 2022. The Values Encoded in Machine Learning Research. arXiv:2106.15590. [URL](https:\/\/arxiv.org\/abs\/2106.15590)\n\nBoard of Governors of the Federal Reserve System. SR 11-7: Guidance on Model Risk Management. (April 4, 2011). [URL](https:\/\/www.federalreserve.gov\/supervisionreg\/srletters\/sr1107.htm)\n\nIason Gabriel, Artificial Intelligence, Values, and Alignment. Minds & Machines 30, 411\u2013437 (2020). [URL](https:\/\/doi.org\/10.1007\/s11023-020-09539-2)\n\nPEAT \u201cBusiness Case for Equitable AI\u201d. [URL](https:\/\/www.peatworks.org\/ai-disability-inclusion-toolkit\/business-case-for-equitable-ai\/)",
"AI Actors":[
],
"Topic":[
"Socio-technical systems",
"Societal Values"
]
},
{
"type":"Map",
"title":"MAP 1.4",
"category":"MAP-1",
"description":"The business value or context of business use has been clearly defined or \u2013 in the case of assessing existing AI systems \u2013 re-evaluated.",
"section_about":"Socio-technical AI risks emerge from the interplay between technical development decisions and how a system is used, who operates it, and the social context into which it is deployed. Addressing these risks is complex and requires a commitment to understanding how contextual factors may interact with AI lifecycle actions. One such contextual factor is how organizational mission and identified system purpose create incentives within AI system design, development, and deployment tasks that may result in positive and negative impacts. By establishing comprehensive and explicit enumeration of AI systems\u2019 context of of business use and expectations, organizations can identify and manage these types of risks.",
"section_actions":"- Document business value or context of business use \n- Reconcile documented concerns about the system\u2019s purpose within the business context of use compared to the organization\u2019s stated values, mission statements, social responsibility commitments, and AI principles.\n- Reconsider the design, implementation strategy, or deployment of AI systems with potential impacts that do not reflect institutional values.",
"section_doc":"### Organizations can document the following\n- What goals and objectives does the entity expect to achieve by designing, developing, and\/or deploying the AI system?\n- To what extent are the system outputs consistent with the entity\u2019s values and principles to foster public trust and equity?\n- To what extent are the metrics consistent with system goals, objectives, and constraints, including ethical and compliance considerations?\n\n### AI Transparency Resources\n- GAO-21-519SP: AI Accountability Framework for Federal Agencies & Other Entities. [URL](https:\/\/www.gao.gov\/products\/gao-21-519sp)\n- Intel.gov: AI Ethics Framework for Intelligence Community - 2020. [URL](https:\/\/www.intelligence.gov\/artificial-intelligence-ethics-framework-for-the-intelligence-community)\n- WEF Model AI Governance Framework Assessment 2020. [URL](https:\/\/www.pdpc.gov.sg\/-\/media\/Files\/PDPC\/PDF-Files\/Resource-for-Organisation\/AI\/SGModelAIGovFramework2.pdf)",
"section_ref":"Algorithm Watch. AI Ethics Guidelines Global Inventory. [URL](https:\/\/inventory.algorithmwatch.org\/)\n\nEthical OS toolkit. [URL](https:\/\/ethicalos.org\/)\n\nEmanuel Moss and Jacob Metcalf. 2020. Ethics Owners: A New Model of Organizational Responsibility in Data-Driven Technology Companies. Data & Society Research Institute. [URL](https:\/\/datasociety.net\/pubs\/Ethics-Owners.pdf)\n\nFuture of Life Institute. Asilomar AI Principles. [URL](https:\/\/futureoflife.org\/2017\/08\/11\/ai-principles\/)\n\nLeonard Haas, Sebastian Gie\u00dfler, and Veronika Thiel. 2020. In the realm of paper tigers \u2013 exploring the failings of AI ethics guidelines. (April 28, 2020). [URL](https:\/\/algorithmwatch.org\/en\/ai-ethics-guidelines-inventory-upgrade-2020\/)",
"AI Actors":[
],
"Topic":[
"Context of Use"
]
},
{
"type":"Map",
"title":"MAP 1.5",
"category":"MAP-1",
"description":"Organizational risk tolerances are determined and documented.",
"section_about":"Risk tolerance reflects the level and type of risk the organization is willing to accept while conducting its mission and carrying out its strategy.\n\nOrganizations can follow existing regulations and guidelines for risk criteria, tolerance and response established by organizational, domain, discipline, sector, or professional requirements. Some sectors or industries may have established definitions of harm or may have established documentation, reporting, and disclosure requirements. \n\nWithin sectors, risk management may depend on existing guidelines for specific applications and use case settings. Where established guidelines do not exist, organizations will want to define reasonable risk tolerance in consideration of different sources of risk (e.g., financial, operational, safety and wellbeing, business, reputational, and model risks) and different levels of risk (e.g., from negligible to critical).\n\nRisk tolerances inform and support decisions about whether to continue with development or deployment - termed \u201cgo\/no-go\u201d. Go\/no-go decisions related to AI system risks can take stakeholder feedback into account, but remain independent from stakeholders\u2019 vested financial or reputational interests.\n\nIf mapping risk is prohibitively difficult, a \"no-go\" decision may be considered for the specific system.",
"section_actions":"- Utilize existing regulations and guidelines for risk criteria, tolerance and response established by organizational, domain, discipline, sector, or professional requirements.\n- Establish risk tolerance levels for AI systems and allocate the appropriate oversight resources to each level. \n- Establish risk criteria in consideration of different sources of risk, (e.g., financial, operational, safety and wellbeing, business, reputational, and model risks) and different levels of risk (e.g., from negligible to critical). \n- Identify maximum allowable risk tolerance above which the system will not be deployed, or will need to be prematurely decommissioned, within the contextual or application setting.\n- Articulate and analyze tradeoffs across trustworthiness characteristics as relevant to proposed context of use. When tradeoffs arise, document them and plan for traceable actions (e.g.: impact mitigation, removal of system from development or use) to inform management decisions. \n- Review uses of AI systems for \u201coff-label\u201d purposes, especially in settings that organizations have deemed as high-risk. Document decisions, risk-related trade-offs, and system limitations.",
"section_doc":"### Organizations can document the following\n- Which existing regulations and guidelines apply, and the entity has followed, in the development of system risk tolerances?\n- What criteria and assumptions has the entity utilized when developing system risk tolerances? \n- How has the entity identified maximum allowable risk tolerance?\n- What conditions and purposes are considered \u201coff-label\u201d for system use?\n\n### AI Transparency Resources\n- GAO-21-519SP: AI Accountability Framework for Federal Agencies & Other Entities. [URL](https:\/\/www.gao.gov\/products\/gao-21-519sp)\n- WEF Model AI Governance Framework Assessment 2020. [URL](https:\/\/www.pdpc.gov.sg\/-\/media\/Files\/PDPC\/PDF-Files\/Resource-for-Organisation\/AI\/SGModelAIGovFramework2.pdf)\n- WEF Companion to the Model AI Governance Framework- 2020. [URL](https:\/\/www.pdpc.gov.sg\/-\/media\/Files\/PDPC\/PDF-Files\/Resource-for-Organisation\/AI\/SGIsago.pdf)",
"section_ref":"Board of Governors of the Federal Reserve System. SR 11-7: Guidance on Model Risk Management. (April 4, 2011). [URL](https:\/\/www.federalreserve.gov\/supervisionreg\/srletters\/sr1107.htm)\n\nThe Office of the Comptroller of the Currency. Enterprise Risk Appetite Statement. (Nov. 20, 2019). [URL](https:\/\/www.occ.treas.gov\/publications-and-resources\/publications\/banker-education\/files\/pub-risk-appetite-statement.pdf)\n\nBrenda Boultwood, How to Develop an Enterprise Risk-Rating Approach (Aug. 26, 2021). Global Association of Risk Professionals (garp.org). Accessed Jan. 4, 2023. [URL](https:\/\/www.garp.org\/risk-intelligence\/culture-governance\/how-to-develop-an-enterprise-risk-rating-approach)\n\nVirginia Eubanks, 1972-, Automating Inequality: How High-tech Tools Profile, Police, and Punish the Poor. New York, NY, St. Martin's Press, 2018.\n\nGAO-17-63: Enterprise Risk Management: Selected Agencies\u2019 Experiences Illustrate Good Practices in Managing Risk. [URL](https:\/\/www.gao.gov\/assets\/gao-17-63.pdf) See Table 3.\n\nNIST Risk Management Framework. [URL](https:\/\/csrc.nist.gov\/projects\/risk-management\/about-rmf)",
"AI Actors":[
],
"Topic":[
"Risk Tolerance"
]
},
{
"type":"Map",
"title":"MAP 1.6",
"category":"MAP-1",
"description":"System requirements (e.g., \u201cthe system shall respect the privacy of its users\u201d) are elicited from and understood by relevant AI actors. Design decisions take socio-technical implications into account to address AI risks.",
"section_about":"AI system development requirements may outpace documentation processes for traditional software. When written requirements are unavailable or incomplete, AI actors may inadvertently overlook business and stakeholder needs, over-rely on implicit human biases such as confirmation bias and groupthink, and maintain exclusive focus on computational requirements. \n\nEliciting system requirements, designing for end users, and considering societal impacts early in the design phase is a priority that can enhance AI systems\u2019 trustworthiness.",
"section_actions":"- Proactively incorporate trustworthy characteristics into system requirements.\n- Establish mechanisms for regular communication and feedback between relevant AI actors and internal or external stakeholders related to system design or deployment decisions.\n- Develop and standardize practices to assess potential impacts at all stages of the AI lifecycle, and in collaboration with interdisciplinary experts, actors external to the team that developed or deployed the AI system, and potentially impacted communities . \n- Include potentially impacted groups, communities and external entities (e.g. civil society organizations, research institutes, local community groups, and trade associations) in the formulation of priorities, definitions and outcomes during impact assessment activities. \n- Conduct qualitative interviews with end user(s) to regularly evaluate expectations and design plans related to Human-AI configurations and tasks.\n- Analyze dependencies between contextual factors and system requirements. List potential impacts that may arise from not fully considering the importance of trustworthiness characteristics in any decision making.\n- Follow responsible design techniques in tasks such as software engineering, product management, and participatory engagement. Some examples for eliciting and documenting stakeholder requirements include product requirement documents (PRDs), user stories, user interaction\/user experience (UI\/UX) research, systems engineering, ethnography and related field methods.\n- Conduct user research to understand individuals, groups and communities that will be impacted by the AI, their values & context, and the role of systemic and historical biases. Integrate learnings into decisions about data selection and representation.",
"section_doc":"### Organizations can document the following\n- What type of information is accessible on the design, operations, and limitations of the AI system to external stakeholders, including end users, consumers, regulators, and individuals impacted by use of the AI system?\n- To what extent is this information sufficient and appropriate to promote transparency? Promote transparency by enabling external stakeholders to access information on the design, operation, and limitations of the AI system.\n- To what extent has relevant information been disclosed regarding the use of AI systems, such as (a) what the system is for, (b) what it is not for, (c) how it was designed, and (d) what its limitations are? (Documentation and external communication can offer a way for entities to provide transparency.)\n- How will the relevant AI actor(s) address changes in accuracy and precision due to either an adversary\u2019s attempts to disrupt the AI system or unrelated changes in the operational\/business environment, which may impact the accuracy of the AI system?\n- What metrics has the entity developed to measure performance of the AI system?\n- What justifications, if any, has the entity provided for the assumptions, boundaries, and limitations of the AI system?\n\n### AI Transparency Resources\n- GAO-21-519SP: AI Accountability Framework for Federal Agencies & Other Entities. [URL](https:\/\/www.gao.gov\/products\/gao-21-519sp)\n- Stakeholders in Explainable AI, Sep. 2018. [URL]( http:\/\/arxiv.org\/abs\/1810.00184)\n- High-Level Expert Group on Artificial Intelligence set up by the European Commission, Ethics Guidelines for Trustworthy AI. [URL](https:\/\/digital-strategy.ec.europa.eu\/en\/library\/ethics-guidelines-trustworthy-ai), [PDF](https:\/\/www.aepd.es\/sites\/default\/files\/2019-12\/ai-ethics-guidelines.pdf)",
"section_ref":"National Academies of Sciences, Engineering, and Medicine 2022. Fostering Responsible Computing Research: Foundations and Practices. Washington, DC: The National Academies Press. [URL](https:\/\/doi.org\/10.17226\/26507)\n\nAbeba Birhane, William S. Isaac, Vinodkumar Prabhakaran, Mark Diaz, Madeleine Clare Elish, Iason Gabriel and Shakir Mohamed. \u201cPower to the People? Opportunities and Challenges for Participatory AI.\u201d Equity and Access in Algorithms, Mechanisms, and Optimization (2022). [URL](https:\/\/arxiv.org\/pdf\/2209.07572.pdf)\n\nAmit K. Chopra, Fabiano Dalpiaz, F. Ba\u015fak Aydemir, et al. 2014. Protos: Foundations for engineering innovative sociotechnical systems. In 2014 IEEE 22nd International Requirements Engineering Conference (RE) (2014), 53-62. [URL](https:\/\/doi.org\/10.1109\/RE.2014.6912247)\n\nAndrew D. Selbst, danah boyd, Sorelle A. Friedler, et al. 2019. Fairness and Abstraction in Sociotechnical Systems. In Proceedings of the Conference on Fairness, Accountability, and Transparency (FAT* '19). Association for Computing Machinery, New York, NY, USA, 59\u201368. [URL](https:\/\/doi.org\/10.1145\/3287560.3287598)\n\nGordon Baxter and Ian Sommerville. 2011. Socio-technical systems: From design methods to systems engineering. Interacting with Computers, 23, 1 (Jan. 2011), 4\u201317. [URL](https:\/\/doi.org\/10.1016\/j.intcom.2010.07.003)\n\nRoel Dobbe, Thomas Krendl Gilbert, and Yonatan Mintz. 2021. Hard choices in artificial intelligence. Artificial Intelligence 300 (14 July 2021), 103555, ISSN 0004-3702. [URL](https:\/\/doi.org\/10.1016\/j.artint.2021.103555)\n\nYilin Huang, Giacomo Poderi, Sanja \u0160\u0107epanovi\u0107, et al. 2019. Embedding Internet-of-Things in Large-Scale Socio-technical Systems: A Community-Oriented Design in Future Smart Grids. In The Internet of Things for Smart Urban Ecosystems (2019), 125-150. Springer, Cham. [URL](https:\/\/doi.org\/10.1007\/978-3-319-96550-5_6)\n\nVictor Udoewa, (2022). An introduction to radical participatory design: decolonising participatory design processes. Design Science. 8. 10.1017\/dsj.2022.24. [URL](https:\/\/www.cambridge.org\/core\/journals\/design-science\/article\/an-introduction-to-radical-participatory-design-decolonising-participatory-design-processes\/63F70ECC408844D3CD6C1A5AC7D35F4D)",
"AI Actors":[
],
"Topic":[
"Socio-technical systems",
"Impact Assessment",
"Documentation"
]
},
{
"type":"Map",
"title":"MAP 2.1",
"category":"MAP-2",
"description":"The specific task, and methods used to implement the task, that the AI system will support is defined (e.g., classifiers, generative models, recommenders).",
"section_about":"AI actors define the technical learning or decision-making task(s) an AI system is designed to accomplish, or the benefits that the system will provide. The clearer and narrower the task definition, the easier it is to map its benefits and risks, leading to more fulsome risk management.",
"section_actions":"- Define and document AI system\u2019s existing and potential learning task(s) along with known assumptions and limitations.",
"section_doc":"### Organizations can document the following\n\n- To what extent has the entity clearly defined technical specifications and requirements for the AI system?\n- To what extent has the entity documented the AI system\u2019s development, testing methodology, metrics, and performance outcomes?\n- How do the technical specifications and requirements align with the AI system\u2019s goals and objectives?\n- Did your organization implement accountability-based practices in data management and protection (e.g. the PDPA and OECD Privacy Principles)?\n- How are outputs marked to clearly show that they came from an AI?\n\n### AI Transparency Resources\n\n- Datasheets for Datasets. [URL](http:\/\/arxiv.org\/abs\/1803.09010)\n- WEF Model AI Governance Framework Assessment 2020. [URL](https:\/\/www.pdpc.gov.sg\/-\/media\/Files\/PDPC\/PDF-Files\/Resource-for-Organisation\/AI\/SGModelAIGovFramework2.pdf)\n- WEF Companion to the Model AI Governance Framework- 2020. [URL](https:\/\/www.pdpc.gov.sg\/-\/media\/Files\/PDPC\/PDF-Files\/Resource-for-Organisation\/AI\/SGIsago.pdf)\n- ATARC Model Transparency Assessment (WD) \u2013 2020. [URL](https:\/\/atarc.org\/wp-content\/uploads\/2020\/10\/atarc_model_transparency_assessment-FINAL-092020-2.docx)\n- Transparency in Artificial Intelligence - S. Larsson and F. Heintz \u2013 2020. [URL](https:\/\/lucris.lub.lu.se\/ws\/files\/79208055\/Larsson_Heintz_2020_Transparency_in_artificial_intelligence_2020_05_05.pdf)",
"section_ref":"Leong, Brenda (2020). The Spectrum of Artificial Intelligence - An Infographic Tool. Future of Privacy Forum. [URL](https:\/\/fpf.org\/blog\/the-spectrum-of-artificial-intelligence-an-infographic-tool\/)\n\nBrownlee, Jason (2020). A Tour of Machine Learning Algorithms. Machine Learning Mastery. [URL](https:\/\/machinelearningmastery.com\/a-tour-of-machine-learning-algorithms\/)",
"AI Actors":[
],
"Topic":[
"Socio-technical systems"
]
},
{
"type":"Map",
"title":"MAP 2.2",
"category":"MAP-2",
"description":"Information about the AI system\u2019s knowledge limits and how system output may be utilized and overseen by humans is documented. Documentation provides sufficient information to assist relevant AI actors when making informed decisions and taking subsequent actions.",
"section_about":"An AI lifecycle consists of many interdependent activities involving a diverse set of actors that often do not have full visibility or control over other parts of the lifecycle and its associated contexts or risks. The interdependencies between these activities, and among the relevant AI actors and organizations, can make it difficult to reliably anticipate potential impacts of AI systems. For example, early decisions in identifying the purpose and objective of an AI system can alter its behavior and capabilities, and the dynamics of deployment setting (such as end users or impacted individuals) can shape the positive or negative impacts of AI system decisions. As a result, the best intentions within one dimension of the AI lifecycle can be undermined via interactions with decisions and conditions in other, later activities. This complexity and varying levels of visibility can introduce uncertainty. And, once deployed and in use, AI systems may sometimes perform poorly, manifest unanticipated negative impacts, or violate legal or ethical norms. These risks and incidents can result from a variety of factors. For example, downstream decisions can be influenced by end user over-trust or under-trust, and other complexities related to AI-supported decision-making.\n\nAnticipating, articulating, assessing and documenting AI systems\u2019 knowledge limits and how system output may be utilized and overseen by humans can help mitigate the uncertainty associated with the realities of AI system deployments. Rigorous design processes include defining system knowledge limits, which are confirmed and refined based on TEVV processes.",
"section_actions":"- Document settings, environments and conditions that are outside the AI system\u2019s intended use. \n- Design for end user workflows and toolsets, concept of operations, and explainability and interpretability criteria in conjunction with end user(s) and associated qualitative feedback.\n- Plan and test human-AI configurations under close to real-world conditions and document results.\n- Follow stakeholder feedback processes to determine whether a system achieved its documented purpose within a given use context, and whether end users can correctly comprehend system outputs or results.\n- Document dependencies on upstream data and other AI systems, including if the specified system is an upstream dependency for another AI system or other data.\n- Document connections the AI system or data will have to external networks (including the internet), financial markets, and critical infrastructure that have potential for negative externalities. Identify and document negative impacts as part of considering the broader risk thresholds and subsequent go\/no-go deployment as well as post-deployment decommissioning decisions.",
"section_doc":"### Organizations can document the following\n- Does the AI system provide sufficient information to assist the personnel to make an informed decision and take actions accordingly?\n- What type of information is accessible on the design, operations, and limitations of the AI system to external stakeholders, including end users, consumers, regulators, and individuals impacted by use of the AI system?\n- Based on the assessment, did your organization implement the appropriate level of human involvement in AI-augmented decision-making? \n\n### AI Transparency Resources\n- Datasheets for Datasets. [URL](http:\/\/arxiv.org\/abs\/1803.09010)\n- WEF Model AI Governance Framework Assessment 2020. [URL](https:\/\/www.pdpc.gov.sg\/-\/media\/Files\/PDPC\/PDF-Files\/Resource-for-Organisation\/AI\/SGModelAIGovFramework2.pdf)\n- WEF Companion to the Model AI Governance Framework- 2020. [URL](https:\/\/www.pdpc.gov.sg\/-\/media\/Files\/PDPC\/PDF-Files\/Resource-for-Organisation\/AI\/SGIsago.pdf)\n- ATARC Model Transparency Assessment (WD) \u2013 2020. [URL](https:\/\/atarc.org\/wp-content\/uploads\/2020\/10\/atarc_model_transparency_assessment-FINAL-092020-2.docx)\n- Transparency in Artificial Intelligence - S. Larsson and F. Heintz \u2013 2020. [URL](https:\/\/lucris.lub.lu.se\/ws\/files\/79208055\/Larsson_Heintz_2020_Transparency_in_artificial_intelligence_2020_05_05.pdf)",
"section_ref":"### Context of use\n\nInternational Standards Organization (ISO). 2019. ISO 9241-210:2019 Ergonomics of human-system interaction \u2014 Part 210: Human-centred design for interactive systems. [URL](https:\/\/www.iso.org\/standard\/77520.html)\n\nNational Institute of Standards and Technology (NIST), Mary Theofanos, Yee-Yin Choong, et al. 2017. NIST Handbook 161 Usability Handbook for Public Safety Communications: Ensuring Successful Systems for First Responders. [URL](https:\/\/doi.org\/10.6028\/NIST.HB.161)\n\n### Human-AI interaction\n\nCommittee on Human-System Integration Research Topics for the 711th Human Performance Wing of the Air Force Research Laboratory and the National Academies of Sciences, Engineering, and Medicine. 2022. Human-AI Teaming: State-of-the-Art and Research Needs. Washington, D.C. National Academies Press. [URL](https:\/\/nap.nationalacademies.org\/catalog\/26355\/human-ai-teaming-state-of-the-art-and-research-needs)\n\nHuman Readiness Level Scale in the System Development Process, American National Standards Institute and Human Factors and Ergonomics Society, ANSI\/HFES 400-2021\n\nMicrosoft Responsible AI Standard, v2. [URL](https:\/\/query.prod.cms.rt.microsoft.com\/cms\/api\/am\/binary\/RE4ZPmV)\n\nSaar Alon-Barkat, Madalina Busuioc, Human\u2013AI Interactions in Public Sector Decision Making: \u201cAutomation Bias\u201d and \u201cSelective Adherence\u201d to Algorithmic Advice, Journal of Public Administration Research and Theory, 2022;, muac007. [URL](https:\/\/doi.org\/10.1093\/jopart\/muac007)\n\nZana Bu\u00e7inca, Maja Barbara Malaya, and Krzysztof Z. Gajos. 2021. To Trust or to Think: Cognitive Forcing Functions Can Reduce Overreliance on AI in AI-assisted Decision-making. Proc. ACM Hum.-Comput. Interact. 5, CSCW1, Article 188 (April 2021), 21 pages. [URL](https:\/\/doi.org\/10.1145\/3449287)\n\nMary L. Cummings. 2006 Automation and accountability in decision support system interface design.The Journal of Technology Studies 32(1): 23\u201331. [URL](https:\/\/scholar.lib.vt.edu\/ejournals\/JOTS\/v32\/v32n1\/pdf\/cummings.pdf)\n\nEngstrom, D. F., Ho, D. E., Sharkey, C. M., & Cu\u00e9llar, M. F. (2020). Government by algorithm: Artificial intelligence in federal administrative agencies. NYU School of Law, Public Law Research Paper, (20-54). [URL](https:\/\/www.acus.gov\/report\/government-algorithm-artificial-intelligence-federal-administrative-agencies) \n\nSusanne Gaube, Harini Suresh, Martina Raue, et al. 2021. Do as AI say: susceptibility in deployment of clinical decision-aids. npj Digital Medicine 4, Article 31 (2021). [URL](https:\/\/doi.org\/10.1038\/s41746-021-00385-9)\n\nBen Green. 2021. The Flaws of Policies Requiring Human Oversight of Government Algorithms. Computer Law & Security Review 45 (26 Apr. 2021). [URL](https:\/\/dx.doi.org\/10.2139\/ssrn.3921216)\n\nBen Green and Amba Kak. 2021. The False Comfort of Human Oversight as an Antidote to A.I. Harm. (June 15, 2021). [URL](https:\/\/slate.com\/technology\/2021\/06\/human-oversight-artificial-intelligence-laws.html)\n\nGrgi\u0107-Hla\u010da, N., Engel, C., & Gummadi, K. P. (2019). Human decision making with machine assistance: An experiment on bailing and jailing. Proceedings of the ACM on Human-Computer Interaction, 3(CSCW), 1-25. [URL](https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3359280)\n\nForough Poursabzi-Sangdeh, Daniel G Goldstein, Jake M Hofman, et al. 2021. Manipulating and Measuring Model Interpretability. 
In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (CHI '21). Association for Computing Machinery, New York, NY, USA, Article 237, 1\u201352. [URL](https:\/\/doi.org\/10.1145\/3411764.3445315)\n\nC. J. Smith (2019). Designing trustworthy AI: A human-machine teaming framework to guide development. arXiv preprint arXiv:1910.03515. [URL](https:\/\/kilthub.cmu.edu\/articles\/conference_contribution\/Designing_Trustworthy_AI_A_Human-Machine_Teaming_Framework_to_Guide_Development\/12119847\/1)\n\nT. Warden, P. Carayon, EM et al. The National Academies Board on Human System Integration (BOHSI) Panel: Explainable AI, System Transparency, and Human Machine Teaming. Proceedings of the Human Factors and Ergonomics Society Annual Meeting. 2019;63(1):631-635. doi:10.1177\/1071181319631100. [URL](https:\/\/sites.nationalacademies.org\/cs\/groups\/dbassesite\/documents\/webpage\/dbasse_196735.pdf)",
"AI Actors":[
],
"Topic":[
"Limitations",
"Human oversight",
"Impact Assessment",
"Documentation"
]
},
{
"type":"Map",
"title":"MAP 2.3",
"category":"MAP-2",
"description":"Scientific integrity and TEVV considerations are identified and documented, including those related to experimental design, data collection and selection (e.g., availability, representativeness, suitability), system trustworthiness, and construct validation.",
"section_about":"Standard testing and evaluation protocols provide a basis to confirm assurance in a system that it is operating as designed and claimed. AI systems\u2019 complexities create challenges for traditional testing and evaluation methodologies, which tend to be designed for static or isolated system performance. Opportunities for risk continue well beyond design and deployment, into system operation and application of system-enabled decisions. Testing and evaluation methodologies and metrics therefore address a continuum of activities. TEVV is enhanced when key metrics for performance, safety, and reliability are interpreted in a socio-technical context and not confined to the boundaries of the AI system pipeline. \n\nOther challenges for managing AI risks relate to dependence on large scale datasets, which can impact data quality and validity concerns. The difficulty of finding the \u201cright\u201d data may lead AI actors to select datasets based more on accessibility and availability than on suitability for operationalizing the phenomenon that the AI system intends to support or inform. Such decisions could contribute to an environment where the data used in processes is not fully representative of the populations or phenomena that are being modeled, introducing downstream risks. Practices such as dataset reuse may also lead to disconnect from the social contexts and time periods of their creation. This contributes to issues of validity of the underlying dataset for providing proxies, measures, or predictors within the model.",
"section_actions":"- Identify and document experiment design and statistical techniques that are valid for testing complex socio-technical systems like AI, which involve human factors, emergent properties, and dynamic context(s) of use. \n- Develop and apply TEVV protocols for models, system and its subcomponents, deployment, and operation.\n- Demonstrate and document that AI system performance and validation metrics are interpretable and unambiguous for downstream decision making tasks, and take socio-technical factors such as context of use into consideration.\n- Identify and document assumptions, techniques, and metrics used for testing and evaluation throughout the AI lifecycle including experimental design techniques for data collection, selection, and management practices in accordance with data governance policies established in GOVERN.\n- Identify testing modules that can be incorporated throughout the AI lifecycle, and verify that processes enable corroboration by independent evaluators.\n- Establish mechanisms for regular communication and feedback among relevant AI actors and internal or external stakeholders related to the validity of design and deployment assumptions. \n- Establish mechanisms for regular communication and feedback between relevant AI actors and internal or external stakeholders related to the development of TEVV approaches throughout the lifecycle to detect and assess potentially harmful impacts\n- Document assumptions made and techniques used in data selection, curation, preparation and analysis, including:\n - identification of constructs and proxy targets, \n - development of indices \u2013 especially those operationalizing concepts that are inherently unobservable (e.g. \u201chireability,\u201d \u201ccriminality.\u201d \u201clendability\u201d).\n- Map adherence to policies that address data and construct validity, bias, privacy and security for AI systems and verify documentation, oversight, and processes.\n- Identify and document transparent methods (e.g. causal discovery methods) for inferring causal relationships between constructs being modeled and dataset attributes or proxies.\n- Identify and document processes to understand and trace test and training data lineage and its metadata resources for mapping risks.\n- Document known limitations, risk mitigation efforts associated with, and methods used for, training data collection, selection, labeling, cleaning, and analysis (e.g. 
treatment of missing, spurious, or outlier data; biased estimators).\n- Establish and document practices to check for capabilities that are in excess of those that are planned for, such as emergent properties, and to revisit prior risk management steps in light of any new capabilities.\n- Establish processes to test and verify that design assumptions about the set of deployment contexts continue to be accurate and sufficiently complete.\n- Work with domain experts and other external AI actors to:\n - Gain and maintain contextual awareness and knowledge about how human behavior, organizational factors and dynamics, and society influence, and are represented in, datasets, processes, models, and system output.\n - Identify participatory approaches for responsible Human-AI configurations and oversight tasks, taking into account sources of cognitive bias.\n - Identify techniques to manage and mitigate sources of bias (systemic, computational, human-cognitive) in computational models and systems, and the assumptions and decisions in their development.\n- Investigate and document potential negative impacts related to the full product lifecycle and associated processes that may conflict with organizational values and principles.",
"section_doc":"### Organizations can document the following\n- Are there any known errors, sources of noise, or redundancies in the data?\n- Over what time-frame was the data collected? Does the collection time-frame match the creation time-frame\n- What is the variable selection and evaluation process?\n- How was the data collected? Who was involved in the data collection process? If the dataset relates to people (e.g., their attributes) or was generated by people, were they informed about the data collection? (e.g., datasets that collect writing, photos, interactions, transactions, etc.)\n- As time passes and conditions change, is the training data still representative of the operational environment?\n- Why was the dataset created? (e.g., were there specific tasks in mind, or a specific gap that needed to be filled?)\n- How does the entity ensure that the data collected are adequate, relevant, and not excessive in relation to the intended purpose?\n\n### AI Transparency Resources\n- Datasheets for Datasets. [URL](http:\/\/arxiv.org\/abs\/1803.09010)\n- WEF Model AI Governance Framework Assessment 2020. [URL](https:\/\/www.pdpc.gov.sg\/-\/media\/Files\/PDPC\/PDF-Files\/Resource-for-Organisation\/AI\/SGModelAIGovFramework2.pdf)\n- WEF Companion to the Model AI Governance Framework- 2020. [URL](https:\/\/www.pdpc.gov.sg\/-\/media\/Files\/PDPC\/PDF-Files\/Resource-for-Organisation\/AI\/SGIsago.pdf)\n- GAO-21-519SP: AI Accountability Framework for Federal Agencies & Other Entities. [URL](https:\/\/www.gao.gov\/products\/gao-21-519sp)\n- ATARC Model Transparency Assessment (WD) \u2013 2020. [URL](https:\/\/atarc.org\/wp-content\/uploads\/2020\/10\/atarc_model_transparency_assessment-FINAL-092020-2.docx)\n- Transparency in Artificial Intelligence - S. Larsson and F. Heintz \u2013 2020. [URL](https:\/\/lucris.lub.lu.se\/ws\/files\/79208055\/Larsson_Heintz_2020_Transparency_in_artificial_intelligence_2020_05_05.pdf)",
"section_ref":"### Challenges with dataset selection\n\nAlexandra Olteanu, Carlos Castillo, Fernando Diaz, and Emre Kiciman. 2019. Social Data: Biases, Methodological Pitfalls, and Ethical Boundaries. Front. Big Data 2, 13 (11 July 2019). [URL](https:\/\/doi.org\/10.3389\/fdata.2019.00013)\n\nAmandalynne Paullada, Inioluwa Deborah Raji, Emily M. Bender, et al. 2020. Data and its (dis)contents: A survey of dataset development and use in machine learning research. arXiv:2012.05345. [URL](https:\/\/arxiv.org\/abs\/2012.05345)\n\nCatherine D'Ignazio and Lauren F. Klein. 2020. Data Feminism. The MIT Press, Cambridge, MA. [URL](https:\/\/data-feminism.mitpress.mit.edu\/)\n\nMiceli, M., & Posada, J. (2022). The Data-Production Dispositif. ArXiv, abs\/2205.11963.\n\nBarbara Plank. 2016. What to do about non-standard (or non-canonical) language in NLP. arXiv:1608.07836. [URL](https:\/\/arxiv.org\/abs\/1608.07836)\n\n### Dataset and test, evaluation, validation and verification (TEVV) processes in AI system development\n\nNational Institute of Standards and Technology (NIST), Reva Schwartz, Apostol Vassilev, et al. 2022. NIST Special Publication 1270 Towards a Standard for Identifying and Managing Bias in Artificial Intelligence. [URL](https:\/\/nvlpubs.nist.gov\/nistpubs\/SpecialPublications\/NIST.SP.1270.pdf)\n\nInioluwa Deborah Raji, Emily M. Bender, Amandalynne Paullada, et al. 2021. AI and the Everything in the Whole Wide World Benchmark. arXiv:2111.15366. [URL](https:\/\/arxiv.org\/abs\/2111.15366)\n\n### Statistical balance\n\nZiad Obermeyer, Brian Powers, Christine Vogeli, and Sendhil Mullainathan. 2019. Dissecting racial bias in an algorithm used to manage the health of populations. Science 366, 6464 (25 Oct. 2019), 447-453. [URL](https:\/\/doi.org\/10.1126\/science.aax2342)\n\nAmandalynne Paullada, Inioluwa Deborah Raji, Emily M. Bender, et al. 2020. Data and its (dis)contents: A survey of dataset development and use in machine learning research. arXiv:2012.05345. [URL](https:\/\/arxiv.org\/abs\/2012.05345)\n\nSolon Barocas, Anhong Guo, Ece Kamar, et al. 2021. Designing Disaggregated Evaluations of AI Systems: Choices, Considerations, and Tradeoffs. Proceedings of the 2021 AAAI\/ACM Conference on AI, Ethics, and Society. Association for Computing Machinery, New York, NY, USA, 368\u2013378. [URL](https:\/\/doi.org\/10.1145\/3461702.3462610)\n\n### Measurement and evaluation\n\nAbigail Z. Jacobs and Hanna Wallach. 2021. Measurement and Fairness. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT \u201821). Association for Computing Machinery, New York, NY, USA, 375\u2013385. [URL](https:\/\/doi.org\/10.1145\/3442188.3445901)\n\nBen Hutchinson, Negar Rostamzadeh, Christina Greer, et al. 2022. Evaluation Gaps in Machine Learning Practice. arXiv:2205.05256. [URL](https:\/\/arxiv.org\/abs\/2205.05256)\n\nLaura Freeman, \"Test and evaluation for artificial intelligence.\" Insight 23.1 (2020): 27-30. [URL](https:\/\/doi.org\/10.1002\/inst.12281)\n\n### Existing frameworks\n\nNational Institute of Standards and Technology. (2018). Framework for improving critical infrastructure cybersecurity. [URL](https:\/\/nvlpubs.nist.gov\/nistpubs\/cswp\/nist.cswp.04162018.pdf)\n\nKaitlin R. Boeckl and Naomi B. Lefkovitz. \"NIST Privacy Framework: A Tool for Improving Privacy Through Enterprise Risk Management, Version 1.0.\" National Institute of Standards and Technology (NIST), January 16, 2020. 
[URL](https:\/\/www.nist.gov\/publications\/nist-privacy-framework-tool-improving-privacy-through-enterprise-risk-management.)",
"AI Actors":[
"AI Development",
"TEVV",
"Domain Experts"
],
"Topic":[
"TEVV",
"Data",
"Impact Assessment",
"Limitations"
]
},
{
"type":"Map",
"title":"MAP 3.1",
"category":"MAP-3",
"description":"Potential benefits of intended AI system functionality and performance are examined and documented.",
"section_about":"AI systems have enormous potential to improve quality of life, enhance economic prosperity and security costs. Organizations are encouraged to define and document system purpose and utility, and its potential positive impacts. benefits beyond current known performance benchmarks.\n\nIt is encouraged that risk management and assessment of benefits and impacts include processes for regular and meaningful communication with potentially affected groups and communities. These stakeholders can provide valuable input related to systems\u2019 benefits and possible limitations. Organizations may differ in the types and number of stakeholders with which they engage.\n\nOther approaches such as human-centered design (HCD) and value-sensitive design (VSD) can help AI teams to engage broadly with individuals and communities. This type of engagement can enable AI teams to learn about how a given technology may cause positive or negative impacts, that were not originally considered or intended.",
"section_actions":"- Utilize participatory approaches and engage with system end users to understand and document AI systems\u2019 potential benefits, efficacy and interpretability of AI task output.\n- Maintain awareness and documentation of the individuals, groups, or communities who make up the system\u2019s internal and external stakeholders.\n- Verify that appropriate skills and practices are available in-house for carrying out participatory activities such as eliciting, capturing, and synthesizing user, operator and external feedback, and translating it for AI design and development functions.\n- Establish mechanisms for regular communication and feedback between relevant AI actors and internal or external stakeholders related to system design or deployment decisions.\n- Consider performance to human baseline metrics or other standard benchmarks.\n- Incorporate feedback from end users, and potentially impacted individuals and communities about perceived system benefits .",
"section_doc":"### Organizations can document the following\n- Have the benefits of the AI system been communicated to end users?\n- Have the appropriate training material and disclaimers about how to adequately use the AI system been provided to end users?\n- Has your organization implemented a risk management system to address risks involved in deploying the identified AI system (e.g. personnel risk or changes to commercial objectives)?\n\n### AI Transparency Resources\n- Intel.gov: AI Ethics Framework for Intelligence Community - 2020. [URL](https:\/\/www.intelligence.gov\/artificial-intelligence-ethics-framework-for-the-intelligence-community)\n- GAO-21-519SP: AI Accountability Framework for Federal Agencies & Other Entities. [URL](https:\/\/www.gao.gov\/products\/gao-21-519sp)\n- Assessment List for Trustworthy AI (ALTAI) - The High-Level Expert Group on AI \u2013 2019. [LINK](https:\/\/altai.insight-centre.org\/), [URL](https:\/\/digital-strategy.ec.europa.eu\/en\/library\/assessment-list-trustworthy-artificial-intelligence-altai-self-assessment)",
"section_ref":"Roel Dobbe, Thomas Krendl Gilbert, and Yonatan Mintz. 2021. Hard choices in artificial intelligence. Artificial Intelligence 300 (14 July 2021), 103555, ISSN 0004-3702. [URL](https:\/\/doi.org\/10.1016\/j.artint.2021.103555)\n\nSamir Passi and Solon Barocas. 2019. Problem Formulation and Fairness. In Proceedings of the Conference on Fairness, Accountability, and Transparency (FAT* '19). Association for Computing Machinery, New York, NY, USA, 39\u201348. [URL](https:\/\/doi.org\/10.1145\/3287560.3287567)\n\nVincent T. Covello. 2021. Stakeholder Engagement and Empowerment. In Communicating in Risk, Crisis, and High Stress Situations (Vincent T. Covello, ed.), 87-109. [URL](https:\/\/ieeexplore.ieee.org\/document\/9648995)\n\nYilin Huang, Giacomo Poderi, Sanja \u0160\u0107epanovi\u0107, et al. 2019. Embedding Internet-of-Things in Large-Scale Socio-technical Systems: A Community-Oriented Design in Future Smart Grids. In The Internet of Things for Smart Urban Ecosystems (2019), 125-150. Springer, Cham. [URL](https:\/\/link.springer.com\/chapter\/10.1007\/978-3-319-96550-5_6)\n\nEloise Taysom and Nathan Crilly. 2017. Resilience in Sociotechnical Systems: The Perspectives of Multiple Stakeholders. She Ji: The Journal of Design, Economics, and Innovation, 3, 3 (2017), 165-182, ISSN 2405-8726. [URL](https:\/\/www.sciencedirect.com\/science\/article\/pii\/S2405872617300758)",
"AI Actors":[
"AI Development",
"AI Deployment",
"AI Impact Assessment"
],
"Topic":[
"Socio-technical systems",
"Documentation"
]
},
{
"type":"Map",
"title":"MAP 3.2",
"category":"MAP-3",
"description":"Potential costs, including non-monetary costs, which result from expected or realized AI errors or system functionality and trustworthiness - as connected to organizational risk tolerance - are examined and documented.",
"section_about":"Anticipating negative impacts of AI systems is a difficult task. Negative impacts can be due to many factors, such as system non-functionality or use outside of its operational limits, and may range from minor annoyance to serious injury, financial losses, or regulatory enforcement actions. AI actors can work with a broad set of stakeholders to improve their capacity for understanding systems\u2019 potential impacts \u2013 and subsequently \u2013 systems\u2019 risks.",
"section_actions":"- Perform context analysis to map potential negative impacts arising from not integrating trustworthiness characteristics. When negative impacts are not direct or obvious, AI actors can engage with stakeholders external to the team that developed or deployed the AI system, and potentially impacted communities, to examine and document:\n\t- Who could be harmed?\n\t- What could be harmed?\n\t- When could harm arise?\n\t- How could harm arise?\n- Identify and implement procedures for regularly evaluating the qualitative and quantitative costs of internal and external AI system failures. Develop actions to prevent, detect, and\/or correct potential risks and related impacts. Regularly evaluate failure costs to inform go\/no-go deployment decisions throughout the AI system lifecycle.",
"section_doc":"### Organizations can document the following\n- To what extent does the system\/entity consistently measure progress towards stated goals and objectives?\n- To what extent can users or parties affected by the outputs of the AI system test the AI system and provide feedback?\n- Have you documented and explained that machine errors may differ from human errors?\n\n### AI Transparency Resources\n- Intel.gov: AI Ethics Framework for Intelligence Community - 2020. [URL](https:\/\/www.intelligence.gov\/artificial-intelligence-ethics-framework-for-the-intelligence-community)\n- GAO-21-519SP: AI Accountability Framework for Federal Agencies & Other Entities. [URL](https:\/\/www.gao.gov\/products\/gao-21-519sp)\n- Assessment List for Trustworthy AI (ALTAI) - The High-Level Expert Group on AI \u2013 2019. [LINK](https:\/\/altai.insight-centre.org\/), [URL](https:\/\/digital-strategy.ec.europa.eu\/en\/library\/assessment-list-trustworthy-artificial-intelligence-altai-self-assessment)",
"section_ref":"Abagayle Lee Blank. 2019. Computer vision machine learning and future-oriented ethics. Honors Project. Seattle Pacific University (SPU), Seattle, WA. [URL](https:\/\/digitalcommons.spu.edu\/cgi\/viewcontent.cgi?article=1100&context=honorsprojects)\n\nMargarita Boyarskaya, Alexandra Olteanu, and Kate Crawford. 2020. Overcoming Failures of Imagination in AI Infused System Development and Deployment. arXiv:2011.13416. [URL](https:\/\/arxiv.org\/abs\/2011.13416)\n\nJeff Patton. 2014. User Story Mapping. O'Reilly, Sebastopol, CA. [URL](https:\/\/www.jpattonassociates.com\/story-mapping\/)\n\nMargarita Boenig-Liptsin, Anissa Tanweer & Ari Edmundson (2022) Data Science Ethos Lifecycle: Interplay of ethical thinking and data science practice, Journal of Statistics and Data Science Education, DOI: 10.1080\/26939169.2022.2089411\n\nJ. Cohen, D. S. Katz, M. Barker, N. Chue Hong, R. Haines and C. Jay, \"The Four Pillars of Research Software Engineering,\" in IEEE Software, vol. 38, no. 1, pp. 97-105, Jan.-Feb. 2021, doi: 10.1109\/MS.2020.2973362.\n\nNational Academies of Sciences, Engineering, and Medicine 2022. Fostering Responsible Computing Research: Foundations and Practices. Washington, DC: The National Academies Press. [URL](https:\/\/doi.org\/10.17226\/26507)",
"AI Actors":[
"AI Design",
"AI Development",
"Operation and Monitoring",
"AI Design",
"AI Impact Assessment"
],
"Topic":[
"Impact Assessment",
"Trustworthy Characteristics",
"Validity and Reliability",
"Safety",
"Secure and Resilient",
"Accountability and Transparency",
"Explainability and Interpretability",
"Privacy",
"Fairness and Bias"
]
},
{
"type":"Map",
"title":"MAP 3.3",
"category":"MAP-3",
"description":"Targeted application scope is specified and documented based on the system\u2019s capability, established context, and AI system categorization.",
"section_about":"Systems that function in a narrow scope tend to enable better mapping, measurement, and management of risks in the learning or decision-making tasks and the system context. A narrow application scope also helps ease TEVV functions and related resources within an organization.\n\nFor example, large language models or open-ended chatbot systems that interact with the public on the internet have a large number of risks that may be difficult to map, measure, and manage due to the variability from both the decision-making task and the operational context. Instead, a task-specific chatbot utilizing templated responses that follow a defined \u201cuser journey\u201d is a scope that can be more easily mapped, measured and managed.",
"section_actions":"- Consider narrowing contexts for system deployment, including factors related to:\n - How outcomes may directly or indirectly affect users, groups, communities and the environment.\n - Length of time the system is deployed in between re-trainings.\n - Geographical regions in which the system operates.\n - Dynamics related to community standards or likelihood of system misuse or abuses (either purposeful or unanticipated).\n - How AI system features and capabilities can be utilized within other applications, or in place of other existing processes. \n- Engage AI actors from legal and procurement functions when specifying target application scope.",
"section_doc":"### Organizations can document the following\n- To what extent has the entity clearly defined technical specifications and requirements for the AI system?\n- How do the technical specifications and requirements align with the AI system\u2019s goals and objectives?\n\n### AI Transparency Resources\n- GAO-21-519SP: AI Accountability Framework for Federal Agencies & Other Entities. [URL](https:\/\/www.gao.gov\/products\/gao-21-519sp)\n- Assessment List for Trustworthy AI (ALTAI) - The High-Level Expert Group on AI \u2013 2019. [LINK](https:\/\/altai.insight-centre.org\/), [URL](https:\/\/digital-strategy.ec.europa.eu\/en\/library\/assessment-list-trustworthy-artificial-intelligence-altai-self-assessment)",
"section_ref":"Mark J. Van der Laan and Sherri Rose (2018). Targeted Learning in Data Science. Cham: Springer International Publishing, 2018.\n\nAlice Zheng. 2015. Evaluating Machine Learning Models (2015). O'Reilly. [URL](https:\/\/www.oreilly.com\/library\/view\/evaluating-machine-learning\/9781492048756\/)\n\nBrenda Leong and Patrick Hall (2021). 5 things lawyers should know about artificial intelligence. ABA Journal. [URL](https:\/\/www.abajournal.com\/columns\/article\/5-things-lawyers-should-know-about-artificial-intelligence)\n\nUK Centre for Data Ethics and Innovation, \u201cThe roadmap to an effective AI assurance ecosystem\u201d. [URL](https:\/\/assets.publishing.service.gov.uk\/government\/uploads\/system\/uploads\/attachment_data\/file\/1039146\/The_roadmap_to_an_effective_AI_assurance_ecosystem.pdf)",
"AI Actors":[
"AI Design",
"AI Development",
"Human Factors"
],
"Topic":[
"Context of Use",
"Documentation"
]
},
{
"type":"Map",
"title":"MAP 3.4",
"category":"MAP-3",
"description":"Processes for operator and practitioner proficiency with AI system performance and trustworthiness \u2013 and relevant technical standards and certifications \u2013 are defined, assessed and documented.",
"section_about":"Human-AI configurations can span from fully autonomous to fully manual. AI systems can autonomously make decisions, defer decision-making to a human expert, or be used by a human decision-maker as an additional opinion. In some scenarios, professionals with expertise in a specific domain work in conjunction with an AI system towards a specific end goal\u2014for example, a decision about another individual(s). Depending on the purpose of the system, the expert may interact with the AI system but is rarely part of the design or development of the system itself. These experts are not necessarily familiar with machine learning, data science, computer science, or other fields traditionally associated with AI design or development and - depending on the application - will likely not require such familiarity. For example, for AI systems that are deployed in health care delivery the experts are the physicians and bring their expertise about medicine\u2014not data science, data modeling and engineering, or other computational factors. The challenge in these settings is not educating the end user about AI system capabilities, but rather leveraging, and not replacing, practitioner domain expertise.\n\nQuestions remain about how to configure humans and automation for managing AI risks. Risk management is enhanced when organizations that design, develop or deploy AI systems for use by professional operators and practitioners:\n\n- are aware of these knowledge limitations and strive to identify risks in human-AI interactions and configurations across all contexts, and the potential resulting impacts, \n- define and differentiate the various human roles and responsibilities when using or interacting with AI systems, and\n- determine proficiency standards for AI system operation in proposed context of use, as enumerated in MAP-1 and established in GOVERN-3.2.",
"section_actions":"- Identify and declare AI system features and capabilities that may affect downstream AI actors\u2019 decision-making in deployment and operational settings for example how system features and capabilities may activate known risks in various human-AI configurations, such as selective adherence. \n- Identify skills and proficiency requirements for operators, practitioners and other domain experts that interact with AI systems,Develop AI system operational documentation for AI actors in deployed and operational environments, including information about known risks, mitigation criteria, and trustworthy characteristics enumerated in Map-1. \n- Define and develop training materials for proposed end users, practitioners and operators about AI system use and known limitations. \n- Define and develop certification procedures for operating AI systems within defined contexts of use, and information about what exceeds operational boundaries. \n- Include operators, practitioners and end users in AI system prototyping and testing activities to help inform operational boundaries and acceptable performance. Conduct testing activities under scenarios similar to deployment conditions. \n- Verify model output provided to AI system operators, practitioners and end users is interactive, and specified to context and user requirements defined in MAP-1.\n- Verify AI system output is interpretable and unambiguous for downstream decision making tasks. \n- Design AI system explanation complexity to match the level of problem and context complexity.\n- Verify that design principles are in place for safe operation by AI actors in decision-making environments.\n- Develop approaches to track human-AI configurations, operator, and practitioner outcomes for integration into continual improvement.",
"section_doc":"### Organizations can document the following\n- What policies has the entity developed to ensure the use of the AI system is consistent with its stated values and principles?\n- How will the accountable human(s) address changes in accuracy and precision due to either an adversary\u2019s attempts to disrupt the AI or unrelated changes in operational\/business environment, which may impact the accuracy of the AI?\n- How does the entity assess whether personnel have the necessary skills, training, resources, and domain knowledge to fulfill their assigned responsibilities? \n- Are the relevant staff dealing with AI systems properly trained to interpret AI model output and decisions as well as to detect and manage bias in data?\n- What metrics has the entity developed to measure performance of various components?\n\n### AI Transparency Resources\n- GAO-21-519SP: AI Accountability Framework for Federal Agencies & Other Entities. [URL](https:\/\/www.gao.gov\/products\/gao-21-519sp)\n- WEF Companion to the Model AI Governance Framework- 2020. [URL](https:\/\/www.pdpc.gov.sg\/-\/media\/Files\/PDPC\/PDF-Files\/Resource-for-Organisation\/AI\/SGIsago.pdf)",
"section_ref":"National Academies of Sciences, Engineering, and Medicine. 2022. Human-AI Teaming:\nState-of-the-Art and Research Needs. Washington, DC: The National Academies Press. [URL](https:\/\/doi.org\/10.17226\/26355)\n\nHuman Readiness Level Scale in the System Development Process, American National Standards Institute and Human Factors and Ergonomics Society, ANSI\/HFES 400-2021.\n\nHuman-Machine Teaming Systems Engineering Guide. P McDermott, C Dominguez, N Kasdaglis, M Ryan, I Trahan, A Nelson. MITRE Corporation, 2018.\n\nSaar Alon-Barkat, Madalina Busuioc, Human\u2013AI Interactions in Public Sector Decision Making: \u201cAutomation Bias\u201d and \u201cSelective Adherence\u201d to Algorithmic Advice, Journal of Public Administration Research and Theory, 2022;, muac007. [URL](https:\/\/doi.org\/10.1093\/jopart\/muac007)\n\nBreana M. Carter-Browne, Susannah B. F. Paletz, Susan G. Campbell , Melissa J. Carraway, Sarah H. Vahlkamp, Jana Schwartz , Polly O\u2019Rourke, \u201cThere is No \u201cAI\u201d in Teams: A Multidisciplinary Framework for AIs to Work in Human Teams; Applied Research Laboratory for Intelligence and Security (ARLIS) Report, June 2021. [URL](https:\/\/www.arlis.umd.edu\/sites\/default\/files\/2022-03\/No_AI_In_Teams_FinalReport%20(1).pdf)\n\nR Crootof, ME Kaminski, and WN Price II. Humans in the Loop (March 25, 2022). Vanderbilt Law Review, Forthcoming 2023, U of Colorado Law Legal Studies Research Paper No. 22-10, U of Michigan Public Law Research Paper No. 22-011. [URL](https:\/\/ssrn.com\/abstract=4066781 or http:\/\/dx.doi.org\/10.2139\/ssrn.4066781)\n\nS Mo Jones-Jang, Yong Jin Park, How do people react to AI failure? Automation bias, algorithmic aversion, and perceived controllability, Journal of Computer-Mediated Communication, Volume 28, Issue 1, January 2023, zmac029. [URL](https:\/\/doi.org\/10.1093\/jcmc\/zmac029)\n\nA Knack, R Carter and A Babuta, \"Human-Machine Teaming in Intelligence Analysis: Requirements for developing trust in machine learning systems,\" CETaS Research Reports (December 2022). [URL](https:\/\/cetas.turing.ac.uk\/sites\/default\/files\/2022-12\/cetas_research_report_-_hmt_and_intelligence_analysis_vfinal.pdf)\n\nSD Ramchurn, S Stein , NR Jennings. Trustworthy human-AI partnerships. iScience. 2021;24(8):102891. Published 2021 Jul 24. doi:10.1016\/j.isci.2021.102891. [URL](https:\/\/www.ncbi.nlm.nih.gov\/pmc\/articles\/PMC8365362\/pdf\/main.pdf)\n\nM. Veale, M. Van Kleek, and R. Binns, \u201cFairness and Accountability Design Needs for Algorithmic Support in High-Stakes Public Sector Decision-Making,\u201d in Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems - CHI \u201918. Montreal QC, Canada: ACM Press, 2018, pp. 1\u201314. [URL](http:\/\/dl.acm.org\/citation.cfm?doid=3173574.3174014)",
"AI Actors":[
"AI Design",
"AI Development",
"Human Factors",
"End-Users",
"Domain Experts",
"Operation and Monitoring"
],
"Topic":[
"Human-AI teaming"
]
},
{
"type":"Map",
"title":"MAP 3.5",
"category":"MAP-3",
"description":"Processes for human oversight are defined, assessed, and documented in accordance with organizational policies from GOVERN function.",
"section_about":"As AI systems have evolved in accuracy and precision, computational systems have moved from being used purely for decision support\u2014or for explicit use by and under the\ncontrol of a human operator\u2014to automated decision making with limited input from humans. Computational decision support systems augment another, typically human, system in making decisions.These types of configurations increase the likelihood of outputs being produced with little human involvement. \n\nDefining and differentiating various human roles and responsibilities for AI systems\u2019 governance, and differentiating AI system overseers and those using or interacting with AI systems can enhance AI risk management activities. \n\nIn critical systems, high-stakes settings, and systems deemed high-risk it is of vital importance to evaluate risks and effectiveness of oversight procedures before an AI system is deployed.\n\nUltimately, AI system oversight is a shared responsibility, and attempts to properly authorize or govern oversight practices will not be effective without organizational buy-in and accountability mechanisms, for example those suggested in the GOVERN function.",
"section_actions":"- Identify and document AI systems\u2019 features and capabilities that require human oversight, in relation to operational and societal contexts, trustworthy characteristics, and risks identified in MAP-1. \n- Establish practices for AI systems\u2019 oversight in accordance with policies developed in GOVERN-1. \n- Define and develop training materials for relevant AI Actors about AI system performance, context of use, known limitations and negative impacts, and suggested warning labels.\n- Include relevant AI Actors in AI system prototyping and testing activities. Conduct testing activities under scenarios similar to deployment conditions. \n- Evaluate AI system oversight practices for validity and reliability. When oversight practices undergo extensive updates or adaptations, retest, evaluate results, and course correct as necessary.\n- Verify that model documents contain interpretable descriptions of system mechanisms, enabling oversight personnel to make informed, risk-based decisions about system risks.",
"section_doc":"### Organizations can document the following\n- What are the roles, responsibilities, and delegation of authorities of personnel involved in the design, development, deployment, assessment and monitoring of the AI system?\n- How does the entity assess whether personnel have the necessary skills, training, resources, and domain knowledge to fulfill their assigned responsibilities? \n- Are the relevant staff dealing with AI systems properly trained to interpret AI model output and decisions as well as to detect and manage bias in data?\n- To what extent has the entity documented the AI system\u2019s development, testing methodology, metrics, and performance outcomes?\n\n### AI Transparency Resources\n- GAO-21-519SP: AI Accountability Framework for Federal Agencies & Other Entities. [URL](https:\/\/www.gao.gov\/products\/gao-21-519sp)",
"section_ref":"Ben Green, \u201cThe Flaws of Policies Requiring Human Oversight of Government Algorithms,\u201d SSRN Journal, 2021. [URL](https:\/\/www.ssrn.com\/abstract=3921216)\n\nLuciano Cavalcante Siebert, Maria Luce Lupetti, Evgeni Aizenberg, Niek Beckers, Arkady Zgonnikov, Herman Veluwenkamp, David Abbink, Elisa Giaccardi, Geert-Jan Houben, Catholijn Jonker, Jeroen van den Hoven, Deborah Forster, & Reginald Lagendijk (2021). Meaningful human control: actionable properties for AI system development. AI and Ethics. [URL](https:\/\/link.springer.com\/article\/10.1007\/s43681-022-00167-3)\n\nMary Cummings, (2014). Automation and Accountability in Decision Support System Interface Design. The Journal of Technology Studies. 32. 10.21061\/jots.v32i1.a.4. [URL](https:\/\/scholar.lib.vt.edu\/ejournals\/JOTS\/v32\/v32n1\/pdf\/cummings.pdf)\n\nMadeleine Elish, M. (2016). Moral Crumple Zones: Cautionary Tales in Human-Robot Interaction (WeRobot 2016). SSRN Electronic Journal. 10.2139\/ssrn.2757236. [URL](https:\/\/papers.ssrn.com\/sol3\/papers.cfm?abstract_id=2757236)\n\nR Crootof, ME Kaminski, and WN Price II. Humans in the Loop (March 25, 2022). Vanderbilt Law Review, Forthcoming 2023, U of Colorado Law Legal Studies Research Paper No. 22-10, U of Michigan Public Law Research Paper No. 22-011. [LINK](https:\/\/ssrn.com\/abstract=4066781), [URL](http:\/\/dx.doi.org\/10.2139\/ssrn.4066781)\n\nBogdana Rakova, Jingying Yang, Henriette Cramer, & Rumman Chowdhury (2020). Where Responsible AI meets Reality. Proceedings of the ACM on Human-Computer Interaction, 5, 1 - 23. [URL](https:\/\/arxiv.org\/pdf\/2006.12358.pdf)",
"AI Actors":[
"Human Factors",
"End-Users",
"Domain Experts",
"Operation and Monitoring",
"AI Design"
],
"Topic":[
"Human oversight"
]
},
{
"type":"Map",
"title":"MAP 4.1",
"category":"MAP-4",
"description":"Approaches for mapping AI technology and legal risks of its components \u2013 including the use of third-party data or software \u2013 are in place, followed, and documented, as are risks of infringement of a third-party\u2019s intellectual property or other rights.",
"section_about":"Technologies and personnel from third-parties are another potential sources of risk to consider during AI risk management activities. Such risks may be difficult to map since risk priorities or tolerances may not be the same as the deployer organization.\n\nFor example, the use of pre-trained models, which tend to rely on large uncurated dataset or often have undisclosed origins, has raised concerns about privacy, bias, and unanticipated effects along with possible introduction of increased levels of statistical uncertainty, difficulty with reproducibility, and issues with scientific validity.",
"section_actions":"- Review audit reports, testing results, product roadmaps, warranties, terms of service, end user license agreements, contracts, and other documentation related to third-party entities to assist in value assessment and risk management activities.\n- Review third-party software release schedules and software change management plans (hotfixes, patches, updates, forward- and backward- compatibility guarantees) for irregularities that may contribute to AI system risks.\n- Inventory third-party material (hardware, open-source software, foundation models, open source data, proprietary software, proprietary data, etc.) required for system implementation and maintenance.\n- Review redundancies related to third-party technology and personnel to assess potential risks due to lack of adequate support.",
"section_doc":"### Organizations can document the following\n- Did you establish a process for third parties (e.g. suppliers, end users, subjects, distributors\/vendors or workers) to report potential vulnerabilities, risks or biases in the AI system?\n- If your organization obtained datasets from a third party, did your organization assess and manage the risks of using such datasets?\n- How will the results be independently verified?\n\n### AI Transparency Resources\n- GAO-21-519SP: AI Accountability Framework for Federal Agencies & Other Entities. [URL](https:\/\/www.gao.gov\/products\/gao-21-519sp)\n- Intel.gov: AI Ethics Framework for Intelligence Community - 2020. [URL](https:\/\/www.intelligence.gov\/artificial-intelligence-ethics-framework-for-the-intelligence-community)\n- WEF Model AI Governance Framework Assessment 2020. [URL](https:\/\/www.pdpc.gov.sg\/-\/media\/Files\/PDPC\/PDF-Files\/Resource-for-Organisation\/AI\/SGModelAIGovFramework2.pdf)",
"section_ref":"### Language models\n\nEmily M. Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. 2021. On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? \ud83e\udd9c. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT '21). Association for Computing Machinery, New York, NY, USA, 610\u2013623. [URL](https:\/\/doi.org\/10.1145\/3442188.3445922)\n\nJulia Kreutzer, Isaac Caswell, Lisa Wang, et al. 2022. Quality at a Glance: An Audit of Web-Crawled Multilingual Datasets. Transactions of the Association for Computational Linguistics 10 (2022), 50\u201372. [URL](https:\/\/doi.org\/10.1162\/tacl_a_00447)\n\nLaura Weidinger, Jonathan Uesato, Maribeth Rauh, et al. 2022. Taxonomy of Risks posed by Language Models. In 2022 ACM Conference on Fairness, Accountability, and Transparency (FAccT '22). Association for Computing Machinery, New York, NY, USA, 214\u2013229. [URL](https:\/\/doi.org\/10.1145\/3531146.3533088)\n\nOffice of the Comptroller of the Currency. 2021. Comptroller's Handbook: Model Risk Management, Version 1.0, August 2021. [URL](https:\/\/www.occ.gov\/publications-and-resources\/publications\/comptrollers-handbook\/files\/model-risk-management\/index-model-risk-management.html)\n\nRishi Bommasani, Drew A. Hudson, Ehsan Adeli, et al. 2021. On the Opportunities and Risks of Foundation Models. arXiv:2108.07258. [URL](https:\/\/arxiv.org\/abs\/2108.07258)\n\nJason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, Ed H. Chi, Tatsunori Hashimoto, Oriol Vinyals, Percy Liang, Jeff Dean, William Fedus. \u201cEmergent Abilities of Large Language Models.\u201d ArXiv abs\/2206.07682 (2022). [URL](https:\/\/arxiv.org\/pdf\/2206.07682.pdf)",
"AI Actors":[
"Third-party entities",
"Procurement",
"Operation and Monitoring",
"Governance and Oversight"
],
"Topic":[
"Legal and Regulatory",
"Third-party",
"Pre-trained models",
"Supply Chain",
"Risk Tolerance",
"Risky Emergent Behavior"
]
},
{
"type":"Map",
"title":"MAP 4.2",
"category":"MAP-4",
"description":"Internal risk controls for components of the AI system including third-party AI technologies are identified and documented.",
"section_about":"In the course of their work, AI actors often utilize open-source, or otherwise freely available, third-party technologies \u2013 some of which may have privacy, bias, and security risks. Organizations may consider internal risk controls for these technology sources and build up practices for evaluating third-party material prior to deployment.",
"section_actions":"- Track third-parties preventing or hampering risk-mapping as indications of increased risk. \n- Supply resources such as model documentation templates and software safelists to assist in third-party technology inventory and approval activities.\n- Review third-party material (including data and models) for risks related to bias, data privacy, and security vulnerabilities.\n- Apply traditional technology risk controls \u2013 such as procurement, security, and data privacy controls \u2013 to all acquired third-party technologies.",
"section_doc":"### Organizations can document the following\n- Can the AI system be audited by independent third parties?\n- To what extent do these policies foster public trust and confidence in the use of the AI system?\n- Are mechanisms established to facilitate the AI system\u2019s auditability (e.g. traceability of the development process, the sourcing of training data and the logging of the AI system\u2019s processes, outcomes, positive and negative impact)?\n\n### AI Transparency Resources\n- GAO-21-519SP: AI Accountability Framework for Federal Agencies & Other Entities. [URL](https:\/\/www.gao.gov\/products\/gao-21-519sp)\n- Intel.gov: AI Ethics Framework for Intelligence Community - 2020. [URL](https:\/\/www.intelligence.gov\/artificial-intelligence-ethics-framework-for-the-intelligence-community)\n- WEF Model AI Governance Framework Assessment 2020. [URL](https:\/\/www.pdpc.gov.sg\/-\/media\/Files\/PDPC\/PDF-Files\/Resource-for-Organisation\/AI\/SGModelAIGovFramework2.pdf)\n- Assessment List for Trustworthy AI (ALTAI) - The High-Level Expert Group on AI - 2019. [LINK](https:\/\/altai.insight-centre.org\/), [URL](https:\/\/digital-strategy.ec.europa.eu\/en\/library\/assessment-list-trustworthy-artificial-intelligence-altai-self-assessment).",
"section_ref":"Office of the Comptroller of the Currency. 2021. Comptroller's Handbook: Model Risk Management, Version 1.0, August 2021. Retrieved on July 7, 2022. [URL](https:\/\/www.occ.gov\/publications-and-resources\/publications\/comptrollers-handbook\/files\/model-risk-management\/index-model-risk-management.html)\n\nProposed Interagency Guidance on Third-Party Relationships: Risk Management, 2021. [URL](https:\/\/www.occ.gov\/news-issuances\/news-releases\/2021\/nr-occ-2021-74a.pdf)\n\nKang, D., Raghavan, D., Bailis, P.D., & Zaharia, M.A. (2020). Model Assertions for Monitoring and Improving ML Models. ArXiv, abs\/2003.01668. [URL](https:\/\/proceedings.mlsys.org\/paper\/2020\/file\/a2557a7b2e94197ff767970b67041697-Paper.pdf)",
"AI Actors":[
"AI Deployment",
"TEVV",
"Operation and Monitoring",
"Third-party entities"
],
"Topic":[
"Third-party",
"Pre-trained models"
]
},
{
"type":"Map",
"title":"MAP 5.1",
"category":"MAP-5",
"description":"Likelihood and magnitude of each identified impact (both potentially beneficial and harmful) based on expected use, past uses of AI systems in similar contexts, public incident reports, feedback from those external to the team that developed or deployed the AI system, or other data are identified and documented.",
"section_about":"AI actors can evaluate, document and triage the likelihood of AI system impacts identified in Map 5.1 Likelihood estimates may then be assessed and judged for go\/no-go decisions about deploying an AI system. If an organization decides to proceed with deploying the system, the likelihood and magnitude estimates can be used to assign TEVV resources appropriate for the risk level.",
"section_actions":"- Establish assessment scales for measuring AI systems\u2019 impact. Scales may be qualitative, such as red-amber-green (RAG), or may entail simulations or econometric approaches. Document and apply scales uniformly across the organization\u2019s AI portfolio.\n- Apply TEVV regularly at key stages in the AI lifecycle, connected to system impacts and frequency of system updates.\n- Identify and document likelihood and magnitude of system benefits and negative impacts in relation to trustworthiness characteristics.",
"section_doc":"### Organizations can document the following\n- Which population(s) does the AI system impact?\n- What assessments has the entity conducted on trustworthiness characteristics for example data security and privacy impacts associated with the AI system?\n- Can the AI system be tested by independent third parties?\n\n### AI Transparency Resources\n- Datasheets for Datasets. [URL](http:\/\/arxiv.org\/abs\/1803.09010)\n- GAO-21-519SP: AI Accountability Framework for Federal Agencies & Other Entities. [URL](https:\/\/www.gao.gov\/products\/gao-21-519sp)\n- AI policies and initiatives, in Artificial Intelligence in Society, OECD, 2019. [URL](https:\/\/www.oecd.org\/publications\/artificial-intelligence-in-society-eedfee77-en.htm)\n- Intel.gov: AI Ethics Framework for Intelligence Community - 2020. [URL](https:\/\/www.intelligence.gov\/artificial-intelligence-ethics-framework-for-the-intelligence-community)\n- Assessment List for Trustworthy AI (ALTAI) - The High-Level Expert Group on AI - 2019. [LINK](https:\/\/altai.insight-centre.org\/), [URL](https:\/\/digital-strategy.ec.europa.eu\/en\/library\/assessment-list-trustworthy-artificial-intelligence-altai-self-assessment)",
"section_ref":"Emilio G\u00f3mez-Gonz\u00e1lez and Emilia G\u00f3mez. 2020. Artificial intelligence in medicine and healthcare. Joint Research Centre (European Commission). [URL](https:\/\/op.europa.eu\/en\/publication-detail\/-\/publication\/b4b5db47-94c0-11ea-aac4-01aa75ed71a1\/language-en)\n\nArtificial Intelligence Incident Database. 2022. [URL](https:\/\/incidentdatabase.ai\/?lang=en)\n\nAnthony M. Barrett, Dan Hendrycks, Jessica Newman and Brandie Nonnecke. \u201cActionable Guidance for High-Consequence AI Risk Management: Towards Standards Addressing AI Catastrophic Risks\". ArXiv abs\/2206.08966 (2022) [URL](https:\/\/arxiv.org\/abs\/2206.08966)\n\nGanguli, D., et al. (2022). Red Teaming Language Models to Reduce Harms: Methods, Scaling Behaviors, and Lessons Learned. arXiv. https:\/\/arxiv.org\/abs\/2209.07858",
"AI Actors":[
"AI Design",
"AI Development",
"AI Deployment",
"AI Impact Assessment",
"Operation and Monitoring",
"Affected Individuals and Communities",
"End-Users"
],
"Topic":[
"Participation",
"Impact Assessment"
]
},
{
"type":"Map",
"title":"MAP 5.2",
"category":"MAP-5",
"description":"Practices and personnel for supporting regular engagement with relevant AI actors and integrating feedback about positive, negative, and unanticipated impacts are in place and documented.",
"section_about":"AI systems are socio-technical in nature and can have positive, neutral, or negative implications that extend beyond their stated purpose. Negative impacts can be wide- ranging and affect individuals, groups, communities, organizations, and society, as well as the environment and national security.\n\nOrganizations can create a baseline for system monitoring to increase opportunities for detecting emergent risks. After an AI system is deployed, engaging different stakeholder groups \u2013 who may be aware of, or experience, benefits or negative impacts that are unknown to AI actors involved in the design, development and deployment activities \u2013 allows organizations to understand and monitor system benefits and potential negative impacts more readily.",
"section_actions":"- Establish and document stakeholder engagement processes at the earliest stages of system formulation to identify potential impacts from the AI system on individuals, groups, communities, organizations, and society.\n- Employ methods such as value sensitive design (VSD) to identify misalignments between organizational and societal values, and system implementation and impact.\n- Identify approaches to engage, capture, and incorporate input from system end users and other key stakeholders to assist with continuous monitoring for potential impacts and emergent risks.\n- Incorporate quantitative, qualitative, and mixed methods in the assessment and documentation of potential impacts to individuals, groups, communities, organizations, and society.\n- Identify a team (internal or external) that is independent of AI design and development functions to assess AI system benefits, positive and negative impacts and their likelihood and magnitude.\n- Evaluate and document stakeholder feedback to assess potential impacts for actionable insights regarding trustworthiness characteristics and changes in design approaches and principles.\n- Develop TEVV procedures that incorporate socio-technical elements and methods and plan to normalize across organizational culture. Regularly review and refine TEVV processes.",
"section_doc":"### Organizations can document the following\n- If the AI system relates to people, does it unfairly advantage or disadvantage a particular social group? In what ways? How was this managed?\n- If the AI system relates to other ethically protected groups, have appropriate obligations been met? (e.g., medical data might include information collected from animals)\n- If the AI system relates to people, could this dataset expose people to harm or legal action? (e.g., financial social or otherwise) What was done to mitigate or reduce the potential for harm?\n\n### AI Transparency Resources\n- Datasheets for Datasets. [URL](http:\/\/arxiv.org\/abs\/1803.09010)\n- GAO-21-519SP: AI Accountability Framework for Federal Agencies & Other Entities. [URL](https:\/\/www.gao.gov\/products\/gao-21-519sp)\n- AI policies and initiatives, in Artificial Intelligence in Society, OECD, 2019. [URL](https:\/\/www.oecd.org\/publications\/artificial-intelligence-in-society-eedfee77-en.htm)\n- Intel.gov: AI Ethics Framework for Intelligence Community - 2020. [URL](https:\/\/www.intelligence.gov\/artificial-intelligence-ethics-framework-for-the-intelligence-community)\n- Assessment List for Trustworthy AI (ALTAI) - The High-Level Expert Group on AI - 2019. [LINK](https:\/\/altai.insight-centre.org\/), [URL](https:\/\/digital-strategy.ec.europa.eu\/en\/library\/assessment-list-trustworthy-artificial-intelligence-altai-self-assessment)",
"section_ref":"Susanne Vernim, Harald Bauer, Erwin Rauch, et al. 2022. A value sensitive design approach for designing AI-based worker assistance systems in manufacturing. Procedia Comput. Sci. 200, C (2022), 505\u2013516. [URL](https:\/\/doi.org\/10.1016\/j.procs.2022.01.248)\n\nHarini Suresh and John V. Guttag. 2020. A Framework for Understanding Sources of Harm throughout the Machine Learning Life Cycle. arXiv:1901.10002. Retrieved from [URL](https:\/\/arxiv.org\/abs\/1901.10002)\n\nMargarita Boyarskaya, Alexandra Olteanu, and Kate Crawford. 2020. Overcoming Failures of Imagination in AI Infused System Development and Deployment. arXiv:2011.13416. [URL](https:\/\/arxiv.org\/abs\/2011.13416)\n\nKonstantinia Charitoudi and Andrew Blyth. A Socio-Technical Approach to Cyber Risk Management and Impact Assessment. Journal of Information Security 4, 1 (2013), 33-41. [URL](http:\/\/dx.doi.org\/10.4236\/jis.2013.41005)\n\nRaji, I.D., Smart, A., White, R.N., Mitchell, M., Gebru, T., Hutchinson, B., Smith-Loud, J., Theron, D., & Barnes, P. (2020). Closing the AI accountability gap: defining an end-to-end framework for internal algorithmic auditing. Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency.\n\nEmanuel Moss, Elizabeth Anne Watkins, Ranjit Singh, Madeleine Clare Elish, & Jacob Metcalf. 2021. Assemlbing Accountability: Algorithmic Impact Assessment for the Public Interest. Data & Society. Accessed 7\/14\/2022 at [URL](https:\/\/datasociety.net\/library\/assembling-accountability-algorithmic-impact-assessment-for-the-public-interest\/)\n\nShari Trewin (2018). AI Fairness for People with Disabilities: Point of View. ArXiv, abs\/1811.10670. [URL](https:\/\/arxiv.org\/pdf\/1811.10670.pdf)\n\nAda Lovelace Institute. 2022. Algorithmic Impact Assessment: A Case Study in Healthcare. Accessed July 14, 2022. [URL](https:\/\/www.adalovelaceinstitute.org\/report\/algorithmic-impact-assessment-case-study-healthcare\/)\n\nMicrosoft Responsible AI Impact Assessment Template. 2022. Accessed July 14, 2022. [URL](https:\/\/blogs.microsoft.com\/wp-content\/uploads\/prod\/sites\/5\/2022\/06\/Microsoft-RAI-Impact-Assessment-Template.pdf)\n\nMicrosoft Responsible AI Impact Assessment Guide. 2022. Accessed July 14, 2022. [URL](https:\/\/blogs.microsoft.com\/wp-content\/uploads\/prod\/sites\/5\/2022\/06\/Microsoft-RAI-Impact-Assessment-Guide.pdf)\n\nMicrosoft Responsible AI Standard, v2. [URL](https:\/\/query.prod.cms.rt.microsoft.com\/cms\/api\/am\/binary\/RE4ZPmV)\n\nMicrosoft Research AI Fairness Checklist. [URL](https:\/\/www.microsoft.com\/en-us\/research\/project\/ai-fairness-checklist\/)\n\nPEAT AI & Disability Inclusion Toolkit \u2013 Risks of Bias and Discrimination in AI Hiring Tools. [URL](https:\/\/www.peatworks.org\/ai-disability-inclusion-toolkit\/risks-of-bias-and-discrimination-in-ai-hiring-tools\/)",
"AI Actors":[
"AI Design",
"Human Factors",
"AI Deployment",
"AI Impact Assessment",
"Operation and Monitoring",
"Domain Experts",
"Affected Individuals and Communities",
"End-Users"
],
"Topic":[
"Participation",
"Impact Assessment"
]