Officers
Chair
Mr Dominique Verdejo, AI and Security, Montpellier, Occitanie, France
Co-Chair
Dr Frédérique Segond, Director at Inria, France
Secretary
Dr Arumugam Chamundeswari, SSN College of Engineering, Rajiv Gandhi Salai (OMR), Kalavakkam, Tamil Nadu, India
Website
http://ai4gs.org/
Aims
The main purpose of AISEC is to debate and demonstrate how AI can help manage all aspects of global security. We wish to collaborate with other IFIP and non-IFIP working groups, addressing the related topics separately. This includes fostering international collaboration and bringing fresh ideas and opinions from a multidisciplinary, multilateral and multicultural group of stakeholders, including AI and security experts (from academia and related fields) and students. We also aim to elaborate reasonable mechanisms for AI governance and to mitigate AI risks.
AI governance has become an urgent issue for humanity, calling for the responsible design, development, deployment and use of AI, including regulation and standards. At the same time, security requires AI to augment the capacities of defense and security personnel so that they can achieve their missions efficiently in the increasing complexity of our world.
In its constitution, UNESCO invites people to think deeply about peace as a protection against war. It is along this line that we would like the global security working group to foresee the threats and associated risks that endanger peace within our societies, as well as between societies and nations, and to promote the use of AI to mitigate these risks. We also foresee the need to think about AI vulnerabilities and potential deception that could hinder its capabilities in defense processes.
Thinking about AI and security symmetrically requires that we envisage how AI may be used as a weapon by adverse forces with similar capacities. AI for attacks, including autonomous weapons, intelligent malware and malevolent bots, must be kept on the radar of our working group to avoid surprises.
AI can be leveraged for improving global security in a variety of application domains including cyber, physical, economic and information security. This proposed working group will examine issues relating to the use of Artificial Intelligence in both offensive and defensive security and will bring together leading researchers, organizations and industry experts from around the world.
Scope
All the different facets of security share a common process of risk assessment, surveillance and response that is at the core of decision-making frameworks in security management, such as Col. John Boyd's OODA (Observe-Orient-Decide-Act) loop or the NIST Cybersecurity Framework ("Identify, Protect, Detect, Respond, Recover"). These methods are generic enough to be used in many different domains, provided they are applied by operators with deep domain knowledge. Injecting AI into these frameworks as a booster of human decision-making seems the most ethical and least dangerous way of using AI while keeping the human operator at the heart of decision making.
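As an illustration, the human-in-the-loop pattern described above can be sketched as one pass through an OODA-style loop. This is a conceptual sketch only; all names and functions below are hypothetical and not part of any WG deliverable or real framework API.

```python
from dataclasses import dataclass
from typing import Callable, Optional

# Hypothetical sketch: an AI-assisted OODA loop in which the AI only
# proposes a response and the human operator keeps the final decision.

@dataclass
class Observation:
    source: str
    data: str

def orient(obs: Observation) -> str:
    """Orient: turn a raw observation into an assessment (a model would sit here)."""
    return f"assessment of '{obs.data}' from {obs.source}"

def decide(assessment: str, human_approves: Callable[[str], bool]) -> Optional[str]:
    """Decide: the AI recommends an action; the human approves or rejects it."""
    recommendation = f"respond to {assessment}"
    return recommendation if human_approves(recommendation) else None

def act(decision: str) -> str:
    """Act: execute only a decision that passed the human gate."""
    return f"executed: {decision}"

# One Observe-Orient-Decide-Act pass with a human gate on the Decide step.
obs = Observation(source="sensor network", data="anomalous traffic")
decision = decide(orient(obs), human_approves=lambda r: True)
if decision is not None:
    print(act(decision))
```

The point of the sketch is the placement of the gate: the AI contributes to Orient and Decide, but nothing reaches Act without explicit operator approval.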
“The shift to AI involves a measure of reliance to an intelligence of considerable analytical potential operating on a fundamentally different experiential paradigm and human operators must be involved to monitor and control AI actions”
The Age of AI, by Henry Kissinger, Eric Schmidt and Daniel Huttenlocher
As an eclectic group of research labs, individual domain experts, and AI- or security-related enterprises, the main objective of WG 12.13 is to define how AI can help security from a global and generic standpoint. It does not limit its focus to AI inferencing technologies but opens it to knowledge management, modeling and simulation, and human-machine interaction, using new interface modalities such as VR and AR that, combined with natural language processing, are key enablers of AI integration into human security operations. Similarly, since security is not a state but a process, integrating AI into it requires thinking in terms of continuous improvement and continuous augmentation of security operations. This means integrating AI into each and every phase of security, from the earliest phases of intelligence gathering up to threat detection and response.
AI technologies considered
- Learning and Intelligence
- Knowledge management, modeling and simulations
- Deep learning and machine learning
- Natural language processing
- Data mining
- Artificial vision, video analytics
- Multi-agent and autonomous systems
- Ontologies
These technologies map onto the following Global Security Technology Matrix, serving the varied needs of distinct security processes in each domain.
Domain / Process | Risk assessment (P) | Security implementation, Data acquisition (D) | Monitoring, Surveillance, Anticipation (C) | Forensics, Response, Containment, Investigation (A)
Data Security | Privacy Impact Assessment | Contract clauses | Data Leak Prevention | Privacy Impact evaluation
Economic Security | Market Research, Mapping, Segmentation | OSINT*, SOCMINT* | Competitive Intelligence, Automated Moderation | Crisis Management
Systems Security | ISO 27005 | ISO 27002, EDR* | SOC*, SIEM*, UEBA*, SOAR*, CTI* | NDR*, Analysis
Physical Security | Site Audit | Access Control, Biometry, Video surveillance, Drones, Intrusion detection | C4I*, Control rooms, Operation centers | Autonomous systems, First responders, Forensics
The P, D, C and A column labels follow the Plan-Do-Check-Act cycle.
* OSINT: Open-Source Intelligence; SOCMINT: Social Media Intelligence; EDR: Endpoint Detection and Response; SOC: Security Operations Center; SIEM: Security Information and Event Management; UEBA: User and Entity Behavior Analytics; SOAR: Security Orchestration, Automation and Response; CTI: Cyber Threat Intelligence; NDR: Network Detection and Response; C4I: Command, Control, Communications, Computers and Intelligence.
AI for Global Security technology mapping
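The matrix above is, in effect, a lookup from a security domain and a process phase to candidate techniques. A minimal sketch in Python (entries transcribed from the table; the data structure and helper name are hypothetical, for illustration only):

```python
# Global Security Technology Matrix as a nested mapping:
# domain -> process phase (P/D/C/A) -> techniques from the table above.
MATRIX = {
    "Data Security": {
        "P": ["Privacy Impact Assessment"],
        "D": ["Contract clauses"],
        "C": ["Data Leak Prevention"],
        "A": ["Privacy Impact evaluation"],
    },
    "Economic Security": {
        "P": ["Market Research", "Mapping", "Segmentation"],
        "D": ["OSINT", "SOCMINT"],
        "C": ["Competitive Intelligence", "Automated Moderation"],
        "A": ["Crisis Management"],
    },
    "Systems Security": {
        "P": ["ISO 27005"],
        "D": ["ISO 27002", "EDR"],
        "C": ["SOC", "SIEM", "UEBA", "SOAR", "CTI"],
        "A": ["NDR", "Analysis"],
    },
    "Physical Security": {
        "P": ["Site Audit"],
        "D": ["Access Control", "Biometry", "Video surveillance",
              "Drones", "Intrusion detection"],
        "C": ["C4I", "Control rooms", "Operation centers"],
        "A": ["Autonomous systems", "First responders", "Forensics"],
    },
}

def techniques(domain: str, phase: str) -> list:
    """Return the techniques the matrix lists for a domain and phase ([] if none)."""
    return MATRIX.get(domain, {}).get(phase, [])

print(techniques("Systems Security", "C"))
```

Encoding the matrix as data makes the two axes explicit: rows are security domains, columns are the phases of the security process.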
Members