Implementing Lethal Autonomous Weapon Systems Into Mission Command Leadership
To what extent are lethal autonomous weapon systems compatible with mission command philosophy, and what are the most important ethical and practical considerations when implementing them? This thesis centres on the leadership of artificial intelligence (AI) in a military context. Grounded in mission command leadership, it offers perspectives on what a military leader should consider when implementing AI. The purpose of the thesis is to provide a better understanding of how to implement and interact with AI in a mission command based military hierarchy, and to identify the most central ethical and practical implications of doing so.

AI is a field of research still in its infancy; even a commonly accepted definition of the term has yet to be agreed upon. Nevertheless, experts within the field, such as Paul Scharre, Michael Horowitz and Robert Work, believe a new technological revolution on the scale of the industrial revolutions is underway, with AI in the lead role (Horowitz, Scharre, & Work, 2018). AI is an enabling technology with a vast number of applications. Kevin Kelly has famously compared AI to electricity: giving objects autonomy will vastly increase their efficiency, much as electrifying them did over a century ago (Kelly, 2014). Just as the industrial revolutions reshaped society, and thereby the battlefield, shaping the history of warfare, the AI revolution may come to shape the future of warfare. Regardless of whether this proves true, it is not a matter one can afford to overlook. Time and again, history has shown the consequences of falling behind during revolutions in warfare: German interwar capitalization on new technology such as aircraft, tanks and radio produced astonishing results during the invasion of France in WWII. The battlefield rarely grants second chances (Scharre, 2018, pp. 93-94).