When algorithms decide, what happens to rationality?

25 January 2026, 08:14 AM IST | Mumbai | Nishant Sahdev

AI is now being used to make decisions that shape nations, from governance to banking to healthcare and welfare. But how much can we trust algorithms that are far from flawless, yet present their output as if it were?

With AI, decisions become quicker and smoother, but also narrower. Human judgment is not removed; it is quietly reshaped around machine suggestions. Representational pic/iStock

India is entering a new phase of decision-making. A welfare application is flagged by a computer system. A student is ranked by software. A loan is rejected without a clear explanation. A hospital bed is assigned by an algorithm.

In many such cases, the authority behind the decision is no longer a visible person. It is a system. A score. A dashboard. These decisions look rational. They are fast, consistent, and based on data. But this raises a big question that India has not fully confronted yet: what kind of rationality are we building when machines increasingly guide public decisions?

For years, digital tools helped humans make better choices. Computers stored records, reduced paperwork, and made information easier to access. Artificial intelligence changes this role. AI systems do not just retrieve information. They produce it - by predicting outcomes, ranking people, summarising situations, and guessing intent. This is not just a technological change. It is a change in how knowledge itself is created and trusted.


Good decision-making depends not only on results, but on how those results are reached. In science, no finding is accepted without knowing how uncertain it is. In public life, decisions are expected to be explainable and open to challenge. AI systems often struggle on both counts.

Today, AI tools are widely used in banking, government administration, education, healthcare, and logistics. According to the World Bank, more than 60 per cent of governments worldwide now use algorithm-based systems in core functions such as welfare delivery and fraud detection. India is clearly part of this trend. But understanding has not kept pace with adoption. Surveys by the OECD show that fewer than one-third of public officials using AI systems can clearly explain how those systems work. In practice, this means institutions are relying on tools they do not fully understand, while treating their outputs as authoritative.

AI systems are not uniquely prone to error. Human judgment is often flawed too. The key difference is visibility. When humans make decisions, uncertainty is usually visible. An official hesitates. A doctor explains risk. A mistake can be questioned. With AI, uncertainty is often hidden. Outputs look confident, polished, and mathematically precise - even when they are wrong.

Public institutions need in-house expertise to question and test AI systems

This matters because scale changes everything. A bad human decision affects one case. A bad algorithmic decision can affect thousands - or millions.

Research published by institutions such as MIT and Nature Machine Intelligence shows consistent problems with algorithmic decision systems: they can amplify bias, react poorly to small changes in data, and express high confidence even when predictions are uncertain. These are not rare bugs. They are built-in risks of systems trained on past data.

Once these systems become part of public administration, their outputs often start to feel like unquestionable facts. Responsibility becomes unclear. Appeals become procedural rather than meaningful. People are told a decision followed "the system," not why it was made.

Over time, this changes how rationality itself is understood. Decisions begin to seem rational simply because they follow an approved algorithmic process, not because they are well-reasoned or morally justified. Rationality becomes about procedure rather than understanding. Following the system replaces thinking through the problem.

India is especially vulnerable to this shift. As a large and diverse democracy, India has long relied on human discretion. This discretion was often messy and imperfect, but it was visible and open to challenge. AI introduces a new form of authority - one that appears objective because it is automated, but is difficult to question because no single person is accountable. This is not just a technology issue. It is a governance issue disguised as efficiency.

There is evidence that AI is already changing how people think at work. Studies by MIT and Stanford show that professionals using AI tools check original sources less often - by as much as 40 per cent - and reach agreement faster, even when the AI output is wrong. Decisions become quicker and smoother, but also narrower. Human judgment is not removed; it is quietly reshaped around machine suggestions. At the national level, the stakes are even higher.

India is not only adopting AI tools; it is also importing the authority behind them. According to Stanford's AI Index, more than 85 per cent of large AI models in use today are developed in the United States or China. When key decisions rely on systems trained and designed elsewhere, efficiency can come at the cost of intellectual and institutional independence. This is not an argument against artificial intelligence. It is an argument against blind trust.

AI works extremely well in clearly defined tasks. Problems arise when success in such areas is assumed to translate automatically into governance and social decision-making - areas shaped by uncertainty, competing values, and moral choices. Studies show productivity gains of around 10-20 per cent in routine cognitive work, but results are far less clear where judgment and accountability matter most. The real policy challenge is not to slow down AI adoption, but to use it more carefully.

High-stakes AI systems should make uncertainty visible, not hide it. Their outputs should be treated as informed suggestions, not final verdicts. Public institutions need in-house expertise to question and test these systems, not just deploy them. Most importantly, responsibility must remain clearly with humans, even when machines assist.

India's challenge in the AI era is not to make faster decisions, but to ensure that speed does not replace understanding. A society that cannot explain its decisions - even when they are backed by impressive statistics - is not becoming more rational. It is becoming more automated. And automation without care is not progress.

60%
of governments use algorithm-based systems for welfare delivery and fraud detection
Source: World Bank

Nishant Sahdev is a theoretical physicist at the University of North Carolina at Chapel Hill, an AI advisor, and the author of the forthcoming book Last Equation Before Silence
