CSS Special 2023 Solved Essays | Artificial Intelligence has Overstepped its Bounds.
Iqra Ali, a student of Sir Syed Kazim Ali, has attempted the CSS Special 2023 essay “Artificial Intelligence has Overstepped its Bounds” on the pattern Sir Syed Kazim Ali teaches his students. Sir Syed Kazim Ali is Pakistan’s top coach for English writing and for CSS and PMS essay and précis, with the highest success rate among his students. The essay is uploaded to help other competitive aspirants learn and practise essay writing techniques and patterns to qualify for the essay paper.
Although artificial intelligence (AI) has made humans fairly dependent on it, it can never overstep its bounds, being nothing more than a set of complex algorithms designed by humans to assist them in the economic, political, and social sectors, ultimately depending on human decisions and intentions for its outcomes. However, man, being a biased and selfish creature, is utilizing machines for unethical tasks, a tendency that needs to be bridled to harness the maximum dividends of machine intelligence.
2- Understanding the Term Artificial Intelligence and Its Significance
- ✓ The simulation of human intelligence processes by machines
3- The Evidence Explaining How Artificial Intelligence Has Not Overstepped Its Bounds
- ✓ AI diagnosing diseases under the supervision of human doctors for interpreting results and deciding on treatment
- Case in Point: IBM’s Watson for Oncology, while providing recommendations for cancer treatment, ultimately relying on human doctors to make the final decision
- ✓ Autonomous devices facing ethical dilemmas due to the lack of human judgment capability in them
- Case in Point: The “trolley problem” in self-driving car development, underscored when a self-driving car killed a pedestrian, demonstrating the lag at humans’ end in programming AI to make ethical choices in uncertain situations
- ✓ AI algorithms in stock trading and financial predictions depending on humans to make strategic decisions
- Case in Point: The “Flash Crash” of 2010, where stock prices plummeted in minutes due to high-frequency trading algorithms requiring the immediate intervention of human regulators to stabilize the market.
- ✓ The use of AI in military applications remaining subject to human decisions and ethics
- Case Study: An AI-equipped machine gun, remotely controlled by Israeli intelligence operatives, being used to kill the Iranian scientist Mohsen Fakhrizadeh
- ✓ The use of AI for over-surveillance manifesting governments’ intentions and the need for regulation
- Case in Point: China’s facial recognition system logging nearly every single citizen, with a vast network of cameras across the country for different purposes
- ✓ AI tools predicting recidivism with racial bias due to criminal justice algorithms installed by biased humans
- Case Study: Research on the COMPAS system explaining, “Black defendants were twice as likely as white defendants to be misclassified as being a higher risk of violent recidivism. And white violent recidivists were 63 per cent more likely to have been misclassified as a low risk of violent recidivism, compared with black violent recidivists,” due to mis-installed algorithms
- ✓ AI systems exhibiting bias and discrimination by learning from the historical data installed in them
- Case in Point: In 2018, Amazon shutting down an AI-powered recruiting tool because it was found to be biased against female candidates
4- Factors Responsible for the Limitations of AI
- ✓ Lack of Organic Reasoning
- Case in Point: According to Richard Yonck, a futurist and author, “Even if we manage to build artificial general intelligence, its reasoning can never fully align with human values. Nonhuman perspectives are only a benefit if they inform rather than impose decision making.”
- ✓ Installation of Limited Algorithms
- Case in Point: ChatGPT, the most advanced AI software, apologizing on every other question due to its limited database
- ✓ Promulgation of Pre-emptive Regulatory Frameworks
- Case in Point: Several organizations, like OpenAI, NIST, and the Australian government, setting regulatory frameworks for data privacy, transparency, accountability, and fairness
5- Critical Analysis – the Implications if AI Oversteps Its Bounds
Artificial Intelligence (AI) is one of the most transformative technological innovations of the 21st century. With its remarkable capacity to analyze massive amounts of data and learn from experience, AI has the potential to reshape numerous aspects of human life, pushing the boundaries of what is possible. However, even with such colossal advancement, the technological marvel remains inherently reliant on human oversight and guidance, keeping it within humankind’s bounds. In other words, it is nothing but a testament to the power of human ingenuity. As demonstrated in the responses of ChatGPT, one of the most viral AI technologies of today’s world, “I do not have the ability to run independently. My responses are generated based on patterns and information present in the data I was trained on, but I lack consciousness, self-awareness, or the ability to function autonomously.” So, from diagnosing diseases to dealing with ethical dilemmas, automation has remained dependent on humans for input and output. Likewise, when AI blunders in performing the tasks it is given, the real intelligence of humans has to intervene to undo the damage. Unfortunately, humans, being biased and selfish creatures, have been utilizing machines for unethical tasks, like invasive surveillance through face-recognition tools, privacy violations and cyber-attacks, the inculcation of discrimination in artificial brains, and even the killing of their fellow men. Although various ethical and regulatory frameworks have been made to curb the misuse of machines driven by the nefarious intentions of humans, ongoing efforts are needed to harness the maximum dividends of machine intelligence.
Before delving into its practicality, the term artificial intelligence (AI) theoretically refers to the simulation of human intelligence processes by machines. In other words, it is about developing systems that include learning (the acquisition of data and the rules for using the data), reasoning (the use of those rules to reach approximate or definite conclusions), and problem-solving. In recent years, the field has become essential to human progress. It has gained significant importance in various fields, including healthcare, transportation and communication, and the economy, due to virtually unlimited access to computing power and the decreasing cost of data storage. All in all, AI is like a kaleidoscope, revealing a dazzling array of patterns and possibilities using human-like intelligence.
Artificial Intelligence has undoubtedly made significant strides, but it is imperative to acknowledge that it has not overstepped its bounds in any area. One compelling piece of evidence lies in how AI systems diagnose diseases. The systems, while becoming increasingly sophisticated, are employed under the supervision of human doctors, who interpret the results and make the ultimate decisions regarding treatment. For instance, IBM’s Watson for Oncology provides recommendations for cancer treatment based on vast datasets and medical knowledge, but it always relies on human doctors to make the final decision. Thus, the relationship between humans and AI, being more symbiotic than parasitic, is a testament to keeping crucial medical decisions in the capable hands of healthcare professionals while harnessing the power of AI to assist in the diagnostic and treatment process.
Moving further, artificial intelligence always requires human judgment to resolve the ethical dilemmas stemming from its use. In fact, machines are prone to blunder only when loopholes are left at the human end while programming a particular technology. For example, an autonomous self-driving vehicle killed a pedestrian on the road despite being built per traffic rules and regulations and following all the directions installed in it. The underlying issue was the “trolley problem” that emerged during self-driving car development, highlighting the challenges of programming AI to navigate such ethical choices in real-world, uncertain scenarios. Thus, ongoing human oversight is necessary to deal with the intricacies of autonomous technologies.
Another area where Artificial Intelligence cannot function without human intervention for strategic decision-making is stock trading and financial prediction. The Flash Crash of 2010 serves as a compelling example in this regard. During the event, stock prices plummeted suddenly and severely within minutes, leading to market disruption and economic turmoil. Since the algorithms lacked the judgment and foresight to prevent the setback, human regulators had to step in swiftly to stabilize the market. Moreover, the subsequent investigation made clear that, ultimately, the human mind of Navinder Singh Sarao, a London-based point-and-click trader, allegedly played a role in the crash. Thus, AI, while powerful in processing vast data and executing trades, relies on human expertise to manage catastrophic disruptions.
Moreover, the use of AI in military applications, while showcasing its potential, has remained subject to human decisions and ethical considerations. A glaring example in this regard is the targeted killing of the Iranian scientist Mohsen Fakhrizadeh, in which an AI-equipped machine gun was employed, remotely controlled by human intelligence, specifically Israeli operatives. The incident underscores that AI, even in military contexts, is ultimately directed and overseen by human operators who make decisions and assess ethical implications. Therefore, AI, while enhancing military capabilities and precision, hinges on the ethical and legal frameworks set by human authorities.
Not only this, but the use of AI for extensive surveillance also raises concerns about governments’ intentions and the need for regulation. For example, China widely uses a facial recognition system that logs data on nearly every single citizen. The country has deployed an extensive network of cameras for various purposes, from public safety to monitoring social behaviour. While AI-driven surveillance can have legitimate uses, such as crime prevention and public safety, the scale and scope of China’s surveillance system underscore the potential for abuse and privacy infringement. Likewise, many private organizations conduct such surveillance for personal gain. Thus, it is humans, not machines or systems, who are responsible for the nefarious, unethical surveillance of the masses and the invasion of their privacy.
Likewise, AI tools used to predict recidivism in the criminal justice system often suffer from racial bias, not because the technology is inherently biased but primarily because they rely on algorithms shaped by biased human input. A significant case study in this regard is the COMPAS system, which has gained notoriety for its racial disparities. Research on the COMPAS system revealed, “Black defendants were twice as likely as white defendants to be misclassified as being a higher risk of violent recidivism. And white violent recidivists were 63 percent more likely to have been misclassified as a low risk of violent recidivism, compared with black violent recidivists,” a disparity attributed to mis-installed algorithms. The instance serves as a stark reminder that the biases inherent in AI systems can perpetuate existing racial disparities and further exacerbate issues in the criminal justice system. It underscores the need for greater transparency, oversight, and ethical considerations in the development and implementation of AI tools within the legal framework to ensure fairness and equitable outcomes for all individuals involved.
AI systems often exhibit bias and discrimination, raising concerns about their ethical implications. However, the fact repeats itself here: machines only work the way they are trained. For instance, Amazon’s AI-powered recruiting tool, developed to assist in the hiring process, was found to screen candidates in a gender-discriminatory way, penalizing female applicants. The incident serves as a stark reminder of the limitations and pitfalls of AI, which inherits biases from the historical data on which it is trained. Thus, unbiased and just humans must be employed to design such software on the one hand, and regular auditing must be done to filter biased data from the machines’ training history on the other, to avoid such blunders. All in all, AI needs human oversight and ethical guidance to ensure that technology is harnessed responsibly and equitably.
After understanding that humans bear ultimate responsibility for all the implications of artificial intelligence, the question arises as to why AI cannot overstep its bounds. Broadly, AI’s boundaries remain intact for several reasons, one being the absence of organic reasoning. AI, as it stands, lacks the nuanced, context-sensitive reasoning that humans possess. The limitation is eloquently expressed by futurist and author Richard Yonck, “Even if we manage to build artificial general intelligence, its reasoning can never fully align with human values. Nonhuman perspectives are only a benefit if they inform rather than impose decision making.” The sentiment underscores the idea that although AI can process vast amounts of data and make predictions, it lacks the depth of human understanding, empathy, and ethical reasoning that are critical in various decision-making scenarios. The need for human oversight and the ability to interpret AI’s output within a broader ethical and social context ensure that AI, even at its most advanced, remains within defined boundaries, leaving ultimate judgment and ethical considerations to human hands.
Moreover, AI operates on limited algorithms. Even the most advanced AI software, such as ChatGPT, has its limitations. As an example, ChatGPT keeps apologizing for every other prompt, acknowledging its limited database when unable to provide comprehensive responses to certain questions. Thus, AI, despite its impressive capabilities, is still bound by the constraints of its programming and data availability. The limitations serve as a reminder that AI remains a tool developed and controlled by humans, and its knowledge and responses are based on existing data up to its last training point. AI’s inability to access real-time, dynamic information and its potential to provide incorrect or incomplete responses underscore the importance of human oversight and expertise. It is an essential factor in ensuring the responsible and accurate use of AI and preventing it from overstepping its bounds, especially in contexts that demand precise and up-to-date information.
Another key reason why Artificial Intelligence has not overstepped its bounds is the establishment of regulations and ethical frameworks. Numerous organizations, including OpenAI, the National Institute of Standards and Technology (NIST), and the Australian government, have recognized the necessity for guiding principles in AI development and usage. These regulatory frameworks address key aspects like data privacy, transparency, accountability, and fairness, emphasizing the need to prevent AI from overreaching and infringing on individual rights or perpetuating bias. Hence, the proactive approach to AI governance underscores the importance of human oversight and ethical considerations in the continued development and deployment of AI technology, thus ensuring that it remains a valuable tool rather than a force that oversteps its limitations.
Nonetheless, if Artificial Intelligence were to overstep its bounds, it could lead to grave ethical, legal, economic, and safety consequences. Ethically, AI’s misuse could infringe upon civil rights and values, while legal issues may arise from discriminatory algorithms and privacy infringements. Economically, overreaching AI might destabilize financial markets and exacerbate job displacement. Safety concerns include physical threats in autonomous systems, while cybersecurity risks would grow. Public trust and acceptance of AI could erode, hindering its positive potential. Therefore, ensuring AI remains within regulatory and ethical frameworks is essential to unleash its benefits while averting these potential harms.
In conclusion, no matter how far AI continues to advance, it has always remained under the control of human intelligence. Being developed as a result of humans’ inquisitiveness, it cannot overstep its bounds, assisting mankind in both constructive and destructive intents. Therefore, rather than fearing machines, more focus must be placed on bridling the intents of the humans who program the technology. Moreover, since AI also learns from the prompts it receives – or the historical data – from time to time, auditing must be done to filter out biased approaches so that a balance is maintained between its potential and the risks associated with overstepping its bounds.