To what extent has the advancement of artificial intelligence and cybersecurity influenced the techniques used by the United States in nuclear deterrence?

To what extent has the advancement of artificial intelligence and cybersecurity influenced the techniques used by the United States in nuclear deterrence, and how do these transformations correspond to or diverge from the theoretical frameworks proposed by Herman Kahn and Thomas Schelling? This inquiry facilitates a thorough examination of the convergence of technology, namely artificial intelligence (A.I.) and cybersecurity, with nuclear deterrence. It also offers a historical perspective by examining the strategies proposed by Kahn and Schelling, and it fosters an examination of United States foreign policy within the framework of these shifting dynamics.

The fields of A.I. and cybersecurity have substantial ramifications for the nuclear deterrence strategies proposed by Herman Kahn and Thomas Schelling, as well as for the present foreign policy of the United States. The deterrence strategy developed by Herman Kahn was founded on the principle of “escalation dominance,” which entails the pursuit of superior firepower and strategic capabilities as a means to dissuade an adversary from intensifying a war (Kahn, 1962; Kahn, 2017). The introduction of A.I. might affect this strategy in several ways. A.I. has the potential to augment a nation’s capacity to anticipate and address challenges, thus potentially fortifying its escalation dominance. Nevertheless, A.I. also has the potential to facilitate cyberattacks on nuclear command and control systems, thus posing a threat to that dominance (Johnson, 2023).

Thomas Schelling’s deterrence approach, by contrast, was founded on the principle of “mutual assured destruction” (MAD), which aims to dissuade an adversary from launching an attack by guaranteeing that any assault would lead to the complete annihilation of all parties involved (Payne, 2008). A.I. and cybersecurity may make mutual destruction harder to guarantee. A.I. has the potential to be used in the development of missile defense systems that can intercept nuclear missiles, hence posing a threat to the notion of MAD. Moreover, cyberattacks have the ability to incapacitate a nation’s nuclear stockpile, thus further eroding the notion of MAD (Freedman, 2018; Freedman, 2019).

The recognition of A.I. and cybersecurity as crucial elements of national security is progressively gaining prominence in contemporary U.S. foreign policy. The United States is making substantial investments in A.I. research and development while also enhancing its cybersecurity measures. These endeavors are directed towards the dual objectives of bolstering the United States’ deterrence capabilities and safeguarding against prospective adversarial threats (Hynek, 2022; Johnson, 2023). The convergence of A.I. and cybersecurity has the capacity to profoundly influence nuclear deterrence policies, as it may bolster a nation’s existing capabilities while also presenting new vulnerabilities. It is therefore essential for policymakers to comprehend these effects and devise methods to mitigate them effectively.

The integration of artificial intelligence in bolstering cybersecurity measures could enhance the United States’ nuclear deterrence plan (Hynek, 2022; Johnson, 2023). This investigation delves into all of the aspects involved, including the state of the U.S. nuclear deterrence strategy, the utilization of A.I. in technology and security practices, and the importance of cybersecurity in protecting national interests. It also encourages a forward-thinking approach by not only examining present scenarios but also reflecting on how these areas can be improved and integrated in the future (Hynek, 2022; Johnson, 2023). The research will cover an analysis of U.S. nuclear deterrence policies, an exploration of A.I.’s role in cybersecurity, an evaluation of risks to nuclear systems and how A.I. can help mitigate them, an assessment of both the positive and negative impacts of incorporating A.I. into nuclear deterrence strategies, and the development of a framework for seamless integration (Hynek, 2022; Johnson, 2023). The dissertation question demonstrates clarity, conciseness, complexity, and arguability, meeting all criteria.

The examination of the impact of A.I. on deterrence and cybersecurity in U.S. foreign policy involves looking back at the history of nuclear deterrence and U.S. foreign policy, introducing the connection between artificial intelligence and cybersecurity, discussing the role of A.I. in nuclear deterrence, integrating artificial intelligence and cybersecurity into U.S. foreign policy, studying real-life examples, drawing conclusions, and considering future implications (Hynek, 2022; Johnson, 2023). This study aims to offer an overview of the importance of deterrence within the realm of U.S. foreign policy. It will delve into how nuclear deterrence strategies have evolved over time and their effects on diplomacy. Furthermore, it will introduce readers to the concepts of A.I. and cybersecurity, emphasizing their significance in tactics and defense strategies. The research seeks to explore how artificial intelligence could potentially impact deterrence by improving warning systems and decision-making processes and by enhancing the precision of nuclear weapons. Additionally, it will analyze how A.I. and cybersecurity are being integrated into U.S. foreign policy initiatives (Hynek, 2022; Johnson, 2023). The paper will also address both the benefits and the risks associated with this integration, including concerns about the vulnerability of weapon systems to cyberattacks.

This research has the potential to provide tangible illustrations of the practical use of artificial intelligence and cybersecurity in deterrence. The objective is to succinctly present the results and analyze their potential implications for deterrence in U.S. policy and global relations. Doing so requires a profound comprehension of deterrence in U.S. foreign policy, as well as of artificial intelligence and cybersecurity. Additionally, the study will include examining problems and identifying correlations spanning several domains.

The combination of artificial intelligence and machine learning offers two principal benefits: improved detection capabilities for early-warning systems and an enhanced capacity for human analysts to cross-analyze intelligence, surveillance, and reconnaissance data (Hynek, 2022; Johnson, 2023). In turn, this will aid in managing human forces, improving resource allocation, and protecting command-and-control architecture against cyberattacks. Autonomous systems of this kind will be used to carry out remote sensing operations in support of human-operated and remotely operated systems, as well as nuclear weaponry. On the other hand, if artificial intelligence is not used appropriately, it has the potential to undermine strategic stability and may introduce faults into these systems (Johnson, 2021; Johnson, 2023).

Problem Statement

Using artificial intelligence and cybersecurity for nuclear deterrence in light of current issues in U.S. foreign policy presents a number of challenges, the most significant of which are the risks of miscalculation and escalation, lack of understanding, the speed of decision-making, vulnerability to cyberattacks, reliability and verification issues, and ethical and legal concerns (Hynek, 2022; Johnson, 2023). A.I. systems are notoriously difficult to understand and often lack transparency. This makes it difficult to anticipate or comprehend their actions, which may result in miscalculation. Artificial intelligence is capable of making judgments far faster than humans, which could lead to a rapid escalation of hostilities, leaving little time for human intervention or diplomatic solutions (Hynek, 2022; Johnson, 2021). Like any other digital system, artificial intelligence systems are susceptible to cyberattacks; an adversary could manipulate an A.I. system to trigger false alerts or unauthorized launches. It may be difficult to guarantee the dependability of A.I. systems in high-pressure situations such as nuclear deterrence, and it would be very difficult, if not impossible, to verify the behavior of these systems under every set of circumstances (Johnson, 2021; Johnson, 2023). The use of artificial intelligence in nuclear deterrence also raises significant ethical and legal concerns: if artificial intelligence causes a nuclear disaster, who will be held responsible for the consequences? Because artificial intelligence can make nuclear weapons seem more controllable and accurate, it could lower the threshold for deploying them, increasing the likelihood of nuclear war. These concerns underline the need for thorough consideration and regulation when incorporating artificial intelligence and cybersecurity into nuclear deterrence strategies (Geist, 2023; Hynek, 2022).

The spread of nuclear weapons is a critical policy issue that could affect nuclear deterrence and U.S. foreign policy, especially in unstable regions or where non-state entities are involved. Factors such as adherence to the Non-Proliferation Treaty (NPT), terrorism, advancements in arsenals, strategic stability, and regional conflicts play key roles in this intricate and multifaceted matter (United Nations, 2010). The NPT stands as a central element in endeavors to curb the proliferation of nuclear arms; however, ensuring compliance poses challenges (United Nations, 2010). While certain nations, like North Korea, have developed weapons and exited the treaty, others, such as Iran, face accusations of violating its terms (Freedman, 2018; Freedman, 2019). Serious concerns arise regarding the acquisition of nuclear materials by non-state actors, such as terrorist groups, through theft from poorly secured facilities or illicit black-market activities. Some argue that countries like the United States are currently modernizing their arsenals, potentially sparking an arms race. The introduction of technologies such as cyber warfare and hypersonic missiles could disrupt the balance of power and weaken the effectiveness of nuclear deterrence (Freedman, 2018; Freedman, 2019). Regional conflicts, such as those in the Middle East or South Asia, may escalate into nuclear confrontations if they involve nations with nuclear capabilities. Addressing these issues requires a combination of measures, including arms control agreements and cooperation with other countries. A deep understanding of international relations, security dynamics, and nuclear technology is crucial for undertaking this challenging task (Freedman, 2018; Freedman, 2019).

Additionally, according to James Johnson, Nik Hynek, Edward Geist, and Seth Franzman, the application of artificial intelligence in conventional warfare and nuclear strategy presents a number of challenges (Johnson, 2021; Hynek, 2022; Johnson, 2023; Geist, 2023). These challenges include ethical and legal issues, a lack of control, security risks, excessive dependence on technology, an arms race, unclear responsibilities, and the possibility of accidental escalation. Artificial intelligence systems are incapable of making ethical judgments because they lack human judgment. This raises questions about the legality and morality of their use in conflict, particularly when it comes to discriminating between combatants and non-combatants or making judgments that might result in the loss of life. The behavior of artificial intelligence systems may be unpredictable, particularly when they use machine learning to adapt to new circumstances (Johnson, 2021; Hynek, 2022; Johnson, 2023; Geist, 2023). This unpredictability may result in the inadvertent escalation of a nascent dispute. In a military context, artificial intelligence systems are susceptible to being hacked, manipulated, or deceived, which could have disastrous effects. Excessive dependence on artificial intelligence might erode human skills and competence, leaving militaries exposed in the event that their technology fails or is rendered ineffective. The development of artificial intelligence for military purposes may also result in an arms race in which countries compete to build ever more powerful weapons, with potentially destabilizing consequences (Johnson, 2021; Hynek, 2022; Johnson, 2023; Geist, 2023). It is not apparent who would be held accountable in the event that an artificial intelligence system commits a war crime or makes a mistake. In nuclear strategy, artificial intelligence systems might misread data and prompt a pre-emptive attack, which could result in an accidental nuclear war. The details may differ depending on the perspectives of the scholars listed, but these are growing problems (Johnson, 2021; Hynek, 2022; Johnson, 2023; Geist, 2023).

Integrating A.I., cybersecurity, nuclear deterrence, and U.S. foreign policy

There are both convergent and divergent opinions about the incorporation of artificial intelligence and cybersecurity into the United States’ foreign policy regarding nuclear deterrence. The viewpoint of increased security may be compared with the danger of escalation (Johnson, 2021; Hynek, 2022; Johnson, 2023). Artificial intelligence and cybersecurity can improve the safety of nuclear deterrence systems: artificial intelligence has the potential to enhance the accuracy and speed of threat detection, while cybersecurity measures can defend these systems against attempts to hack them. At the same time, both open the door to new escalation threats. Artificial intelligence, for example, has the ability to misread data and set off false alarms (Johnson, 2021; Hynek, 2022; Johnson, 2023). Additionally, cyberattacks may be misconstrued as physical assaults, which may result in an escalation of the situation. Control, vulnerability, and transparency are points on which the two domains differ considerably. There are worries about control and responsibility arising from the possibility that artificial intelligence systems, particularly those with autonomous capabilities, might make choices without human involvement, whereas control of cybersecurity measures is normally exercised by professionals (Johnson, 2021; Hynek, 2022; Johnson, 2023). Although artificial intelligence can be used to improve the safety of nuclear deterrence systems, it can also be utilized by adversaries to weaken those systems, whereas the primary focus of cybersecurity is to safeguard systems from vulnerabilities of this kind. Because of their complexity, artificial intelligence systems often lack transparency, which can result in distrust and misunderstanding (Johnson, 2021; Hynek, 2022; Johnson, 2023). Effective cybersecurity measures, by contrast, require a certain degree of openness in order to guarantee conformity with accepted international standards and agreements. A.I. and cybersecurity thus both have the potential to strengthen and to complicate the United States’ foreign policy on nuclear deterrence, but they do so in distinct ways and bring their own sets of issues (Johnson, 2021; Hynek, 2022; Johnson, 2023).

The need for further study of A.I., cybersecurity, nuclear deterrence, and foreign policy

Various national security specialists have underlined the significance of further study of A.I., cybersecurity, nuclear deterrence, and the present foreign policy of the United States, for several reasons. The development of artificial intelligence is accelerating rapidly and has the potential to transform military operations and national security (Johnson, 2021; Hynek, 2022; Johnson, 2023). On the other hand, the technology also raises new difficulties and dangers; for example, attackers might utilize artificial intelligence to conduct sophisticated cyberattacks or to construct autonomous weapons systems, among many other applications. Further study is therefore required to better understand these dangers and to devise measures to reduce their impact. The proliferation of cyber threats is a rising risk to national security. Through cyberattacks, both state and non-state actors have the ability to destroy key infrastructure, steal sensitive information, and even influence elections. Further study is required to better comprehend these dangers and to devise effective protections (Johnson, 2021; Hynek, 2022; Johnson, 2023). Although the fundamentals of nuclear deterrence have been thoroughly demonstrated, the environment is undergoing transformation as a result of the advent of new technologies and actors. More study is necessary to understand how these shifts influence the dynamics of nuclear deterrence and to devise new tactics for preserving stability. The United States also confronts a multifaceted and volatile foreign policy environment (Johnson, 2021; Hynek, 2022; Johnson, 2023). To better understand these shifts and to guide the formulation of successful foreign policy strategies, further study is required. Even when the most current and relevant information is taken into consideration, the particulars may vary depending on the specific area of focus.

Main areas of agreement on A.I., cybersecurity, nuclear deterrence, and foreign policy

In the realm of security, many experts view artificial intelligence as a game changer. It is believed to have the potential to enhance efficiency in data analysis, surveillance, and decision-making processes (Johnson, 2021; Hynek, 2022; Johnson, 2023). However, there are also concerns that A.I. systems could be misused for malicious purposes, posing challenges for safeguarding against cyber threats. Ensuring cybersecurity through cooperative efforts between the public and private sectors is deemed crucial to protecting critical infrastructure and sensitive data (Johnson, 2021; Hynek, 2022; Johnson, 2023). The consensus remains strong on the importance of deterrence in national security strategies, although advancements in technology and new actors are reshaping the dynamics of deterrence. The specific regulations and regions under scrutiny can significantly influence perceptions of this matter. Moreover, experts emphasize the need for foreign policy to be adaptable, proactive, and responsive to an evolving landscape filled with both opportunities and threats (Johnson, 2021; Hynek, 2022; Johnson, 2023).

Current research deficiencies and shortfalls

In the fields of A.I., cybersecurity, nuclear deterrence, and foreign policy, the opinions of specialists are varied and complicated (Boden, 2018). Some national security experts argue for incorporating artificial intelligence into the policymaking process, yet they claim that artificial intelligence and cybersecurity research is lacking because it often fails to take into account the larger strategic context. Other national security experts concentrate on the ethical and legal repercussions of national security judgments (Boden, 2018); they argue that artificial intelligence and cybersecurity research is lacking because it does not effectively address the problems that have been identified. Some nuclear deterrence experts argue that existing A.I. and cybersecurity studies are inadequate because they do not fully comprehend the difficulties of deterrence in the contemporary world, especially in light of emerging technologies such as artificial intelligence (Payne, 2008). Still other experts argue that existing foreign policy is flawed because it often lacks a particular strategic vision, and that research on artificial intelligence and cybersecurity needs to be more directly connected to strategic goals. For the most part, A.I., national security, and nuclear deterrence specialists claim that the research now being conducted on artificial intelligence, cybersecurity, and nuclear deterrence is too specialized and technical and does not adequately take into account the wider strategic, ethical, and legal consequences. In addition, they claim that the present foreign policy is too reactive and lacks a distinct strategic vision (George & Rishikof, 2017; Reveron, 2018).

A number of national security professionals believe that artificial intelligence has the potential to transform both military operations and national security. It could strengthen decision-making processes, contribute to the enhancement of military capabilities, and provide new methods for both defense and offense (Johnson, 2021; Hynek, 2022; Johnson, 2023; Geist, 2023). On the other hand, the technology also presents considerable dangers, such as the possibility of cyber assaults and autonomous weaponry; consequently, further study is required to better understand these hazards and to design suitable legislation and preventative measures. The proliferation of cyber threats is a rising risk to national security: such threats can be used to steal sensitive information, destroy key infrastructure, and erode faith in institutions. Further study may aid in identifying weaknesses, developing more secure systems, and formulating effective responses to cyber assaults (Johnson, 2021; Hynek, 2022; Johnson, 2023; Geist, 2023). Both new technological developments and changes in geopolitical power are posing a threat to the fundamental concepts of nuclear deterrence. Further study may help in reevaluating these principles, investigating alternative tactics, and ensuring that deterrence will continue to be successful. The United States is an essential player in international security: its foreign policy can define international conventions, help to settle disputes, and affect the conduct of other governments. Further study can offer insights into the implications of existing policies, suggestions for ways to enhance them, and information to guide the formulation of future plans. In conclusion, more study in these areas has the potential to contribute to a greater understanding of the possibilities and challenges that they bring, to guide policy choices, and ultimately to improve national security (Johnson, 2021; Hynek, 2022; Johnson, 2023; Geist, 2023).

In order to address the complex and interconnected challenges of A.I., cybersecurity, nuclear deterrence, and U.S. foreign policy, there is a critical need for a more comprehensive and interdisciplinary research design. Many of the approaches now being used lack the requisite breadth and depth, and they do not take into account the interaction between these areas or the possibility of unanticipated effects (Johnson, 2021; Hynek, 2022; Johnson, 2023; Geist, 2023). Artificial intelligence is rapidly advancing and has important implications for national security, yet there is a dearth of exhaustive research investigating its possible effects, especially with regard to autonomous weapon systems and decision-making algorithms. Cybersecurity research often fails to keep up with ever-evolving dangers, despite the fact that the digital sphere is becoming an increasingly significant battleground (Johnson, 2021; Hynek, 2022; Johnson, 2023; Geist, 2023). More in-depth study is needed on defensive methods, potential weaknesses, and the influence of cyber warfare on international relations. Deterrence methods likewise need to be reexamined in light of the shifting global scene and the introduction of new technologies, even though the fundamentals of nuclear deterrence have long been well established (Johnson, 2021; Hynek, 2022; Johnson, 2023; Geist, 2023). A common criticism leveled against United States foreign policy is that it is more reactive than proactive; further study is necessary to establish policies that anticipate and resolve new dangers rather than only reacting to them. All of these domains are intertwined. For instance, artificial intelligence and cybersecurity have immediate consequences for nuclear deterrence and for foreign policy. Nevertheless, research often fails to take these relationships into account, resulting in a fragmented view of the challenges involved. A greater amount of comprehensive, multidisciplinary study, considering these fields in connection with one another rather than in isolation, is required and may result in more effective tactics and policies (Johnson, 2021; Hynek, 2022; Johnson, 2023; Geist, 2023).

The intersection of U.S. foreign policy, Christian worldview, and Biblical ethics

The integration of policy analysis with a biblical framework, specifically within the realm of Christian foreign policy and nuclear deterrence, can yield a range of effects (Anderson & Clark, 2017). These effects encompass various dimensions, including ethical considerations, human dignity, stewardship, eschatological views, peace and reconciliation, and Just War Theory (Johnson & Patterson, 2015). The biblical text offers a moral and ethical framework that may serve as a guiding principle for policy-making. As an example, the biblical precept of “love your neighbor as yourself” (English Standard Version, 2007, Mark 12:31) might foster the adoption of policies that value peace and diplomacy above acts of violence. Christianity places significant emphasis on the intrinsic dignity of every individual, which has the potential to influence policy by promoting adherence to human rights and deterring the use of weapons of mass destruction that inflict widespread damage (Anderson & Clark, 2017; Woodhead, 2004). The biblical notion of stewardship may be extended to include the obligation of countries to assume responsibility for the well-being of their citizens and the global environment; this might deter the use of nuclear weapons owing to their catastrophic ecological consequences. Certain Christian viewpoints, especially those with a significant emphasis on the end times, may see nuclear warfare as a possible realization of biblical predictions, thereby shaping their position on nuclear deterrence (Anderson & Clark, 2017; Woodhead, 2004). The teachings of the Bible pertaining to peace and reconciliation may result in a prioritization of diplomatic approaches and conflict resolution over a dependence on nuclear deterrence. Several Christian intellectuals have made significant contributions to the advancement of Just War Theory, which establishes standards for determining the moral permissibility of engaging in warfare; this, too, has the potential to shape policy on nuclear deterrence. Nevertheless, it is crucial to acknowledge that Christianity encompasses a diverse array of ideas, and various people and organizations may construe and implement biblical teachings in distinct ways. Hence, the influence of a biblical viewpoint on policy analysis may differ significantly based on the particular convictions and understandings of the individuals involved (Anderson & Clark, 2017; Woodhead, 2004).

George M. Marsden

Renowned historian George M. Marsden has delved into the intersection of religion and politics in his body of work on American Christianity. While he does not explicitly argue for the merging of a religious worldview with U.S. policy, he explores how religious beliefs influence political ideologies and behaviors (Marsden, 1997). Marsden’s research suggests that these beliefs can offer a compass for international policy decisions, promoting values like justice, peace, and human rights that are crucial in global affairs. Believers typically start by understanding their teachings through studying texts with guidance from religious figures and experts in the field (Marsden, 1997). However, applying these principles to contemporary issues poses challenges, as it involves interpreting writings within modern contexts, making the process complex yet vital. It may also involve finding a harmony between principles and beliefs that can sometimes clash with one another. Political engagement can manifest in many ways, such as voting, supporting causes, or even running for a position in government (Marsden, 1997). The goal of this effort is to shape policy in alignment with believers’ values. By reflecting on the outcomes, believers can assess the effects of their actions, learn from their experiences, and adapt their approach accordingly. It is important to note that Marsden also highlights the risks linked to this approach. He warns against the emergence of religious nationalism, in which religious beliefs are used to justify antagonistic policies. He promotes a stance that recognizes the role religion plays in public life while upholding the principle of the separation of church and state (Marsden, 1997).

References

Anderson, T. J., & Clark, W. M. (2017). Introduction to Christian worldview. IVP Academic.

Boden, M. A. (2018). Artificial intelligence: A very short introduction. Oxford University Press.

Fishel, J. T. (2017). American National Security Policy: Authorities, Institutions, and Cases (1st ed.). Rowman & Littlefield.

Franzman, S. J. (2021). Drone wars. Post Hill Press.

Freedman, L. (2018). Nuclear deterrence. Penguin Random House UK.

Freedman, L. (2019). The evolution of nuclear strategy. Palgrave Macmillan.

Geist, E. (2023). Deterrence under uncertainty: Artificial intelligence and nuclear warfare. Oxford University Press.

George, R. Z., & Bruce, J. B. (2008). Analyzing Intelligence. Georgetown University Press.

George, R. Z., & Rishikof, H. (2017). The National Security Enterprise (2nd ed.). Georgetown University Press.

Hynek, N. (2022). Militarizing artificial intelligence: Theory, technology, and regulation. Routledge.

Johnson, J. (2023). AI and the bomb: Nuclear strategy and risk in the digital age. Oxford University Press.

Johnson, J. (2021). Artificial intelligence and the future of warfare: The USA, China, and strategic stability. Oxford University Press.

Johnson, J. T., & Patterson, E. D. (2015). Ashgate Research Companion to Military Ethics. Routledge.

Kahn, H. (2017). On escalation: Metaphors and scenarios. Routledge.

Kahn, H. (2007). On thermonuclear war. Transaction Publishers.

Kahn, H. (1962). Thinking about the unthinkable. Horizon Press.

Marsden, G. M. (1997). The outrageous idea of Christian scholarship. Oxford University Press.

Meese, M. J., Nielsen, S. C., & Sondheimer, R. M. (2018). American National Security. Johns Hopkins University Press.

Payne, K. B. (2008). The Great American gamble. National Institute Press.

United Nations. (2010). Nuclear Non-Proliferation Treaty.

Woodhead, L. (2004). Christianity: A very short introduction. Oxford University Press.

Zuckerman, P., & Shook, J. R. (2017). The Oxford Handbook of Secularism. Oxford University Press.