
What Science Says About the Misuse of AI, and How to Avoid It

March 2, 2026 5 min read

Elicit is an AI used in academia to gather the existing research papers relevant to a question and summarise them into a report. I used it to investigate what the research says about the misuse of AI. This blog post is my attempt to take that large, jargon-heavy report and extract the most practical insights for developers. For those interested, you can view the original report generated by Elicit AI along with the prompt here. It also covers some things not included in this post, such as how to implement AI well at an organisation level. Some studies in this blog may seem a bit doomsdayish, such as one showing a 75% reduction in critical thinking. However, it’s important to note that these studies are about people misusing AI, and some of the more shocking studies were done on students – so the negative impacts of AI may not be as bad for developers, especially those who avoid misusing AI.

Key takeaways:

1. Over-reliance (trusting AI responses too much without evaluation) was the most common way AI is misused. This isn't about using AI frequently – it's about how it's used.
2. Over-reliance can be a dangerous cycle: it reduces critical thinking skills, which makes you need AI more, which makes you over-rely on it even more. Therefore, simply using AI more doesn't mean you will increasingly use it better – it can be the opposite if you're not using it wisely.
3. Misuse of AI can reduce critical thinking and motivation.
4. The more people become fatigued by AI, the more they start accepting wrong responses. Fortunately, there are some ways to mitigate this.
5. The greatest predictors of how well someone uses AI are a strong knowledge of AI limitations, good domain knowledge, and being a bit skeptical of automation. The people most at risk of AI over-reliance are those with moderate knowledge of AI and high optimism about automation.
6. Having AI create counterfactuals is surprisingly powerful.

How AI is Misused

There’s 5 Main Ways AI is Misused

1. Over-reliance (accepting AI outputs without critical evaluation)
2. Inappropriate delegation (using AI for unsuited tasks)
3. Attribution errors (presenting AI work as original)
4. Procedural shortcuts (bypassing verification steps)
5. Under-reliance (not using AI when beneficial)

Over-Reliance is the Most Common Way AI is Misused

The most common AI misuse pattern in work contexts appears to be over-reliance—in particular accepting AI outputs without properly evaluating them. A study on doctors found that their accuracy halved when relying too much on AI, and this was even worse for newer doctors.

The Effects of AI Misuse

As mentioned, accuracy can drop significantly when people rely too much on AI, and this wasn't limited to doctors: multiple studies found it across different areas. Interestingly, some research suggests accuracy may drop more in people with less experience or with little confidence in their own opinions, as they assume AI probably knows better than them, even when it doesn't. So if you have low confidence, this might be something to watch out for.

Flow in programming is reduced and boredom is increased through misuse or overuse – a great irony given that programming with AI is often described as "vibe coding". Multiple studies have shown reduced critical thinking skills when AI is misused, and some research shows people struggling more with motivation and decision making. It's easy to become locked in a cycle where over-reliance increases dependence on AI, which encourages even more reliance.

The Good News – There are Things You Can Do


Develop a Strong Knowledge of AI, and a Healthy Skepticism About It

According to the report, "The most consistent predictor of appropriate AI use is individual attitudes toward automation: skeptical users achieved higher accuracy and detected errors more reliably, while those favorable toward automation exhibited dangerous overreliance". This skepticism towards AI isn't just a technical skill but a general attitude, one which the report suggests companies should try to foster. The report also found that people with moderate AI knowledge are actually the most at risk of over-relying. People with little knowledge of AI usually didn't trust it enough to over-rely on it, while those with a lot of knowledge knew its limitations and therefore also avoided over-relying on it.

If you're not sure where to start, I recommend having a look at the courses and articles by Anthropic/Claude. Their AI fluency foundations course is a great starting point. It teaches a framework for using AI designed to help people use AI well long term, even as tools and prompting techniques change. I recommend it to everyone, whether you're new to AI or have been using it daily for a long time.

Have AI Create Counterfactuals

One other thing which is shown to help is having AI give a counterfactual. These are also known as "what-if" questions, as they ask when a given response wouldn't be true. To use a real-world example, say AI flags some synchronous code as having a "critical" performance risk. Rather than assuming it is correct, you could ask: "What specifically makes you classify this as CRITICAL rather than just medium/high priority? Under what conditions would this be less severe?" This is much more effective than just having AI justify its reasoning, as AI can give overly technical justifications which can easily woo people into thinking it is correct.

Of all the advice in the report, I would say that in my personal experience this has been the biggest benefit for the smallest effort, especially when combined with asking for indicators that its suggestion is a good choice. The small change from asking "What do you suggest?" to "What options do you suggest, what would confirm that each is a good choice, and what would indicate that it isn't?" goes a long way. In addition, I will sometimes give AI an example of what this looks like, such as: "Give me suggestions and include both indicators that your suggestion is a good choice and also counterfactuals which suggest it isn't. For example: 'I would suggest X, especially if you need this to scale, but it may not be the best option if you need this done ASAP. Alternatively I would suggest Y, especially if ___, but not if ___.'"
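If you use this pattern often, it can be worth baking it into a reusable prompt wrapper rather than retyping it each time. Below is a minimal Python sketch of that idea; the function name and template wording are my own illustration, not anything from the report.

```python
def with_counterfactuals(question: str) -> str:
    """Wrap a question so the AI is asked to pair every suggestion
    with both supporting indicators and counterfactuals."""
    return (
        f"{question}\n\n"
        "For each option you suggest, include:\n"
        "- indicators that it is a good choice, and\n"
        "- counterfactuals: conditions under which it would NOT be.\n"
        'For example: "I would suggest X, especially if you need this to '
        'scale, but it may not be the best option if you need it done ASAP."'
    )

# The wrapped prompt can then be sent to whichever chat API you use,
# e.g. (untested, model name may need updating):
#   import anthropic
#   client = anthropic.Anthropic()
#   reply = client.messages.create(
#       model="claude-sonnet-4-5",
#       max_tokens=1024,
#       messages=[{
#           "role": "user",
#           "content": with_counterfactuals("How should I cache these API results?"),
#       }],
#   )
```

The point of the wrapper isn't the code itself but the habit: every request goes out already asking for the conditions under which the answer would be wrong.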

Have a Good Domain Knowledge

If you know the topic well, it’s easier to spot responses which don’t seem right. Also, people with less knowledge or experience often trust their opinions less and therefore sometimes assume AI is right even when its answer feels off.

Have Some Confidence in Your Own Opinions

There has been a lot of research recently about AI being "sycophantic" – meaning that it uses flattery and excessive agreeableness to boost people's egos and make them double down on their beliefs. I have no doubt that this is a major issue; however, one study suggests the opposite also happens: people with low confidence in their own knowledge and opinions show large drops in accuracy when using AI, as they frequently assume AI is right when it disagrees with them.

Decrease Work Stress and Increase Engagement with AI

Increasing engagement with AI while reducing mental effort is shown to encourage better AI usage. This applies at both a personal and a company level, so it's relevant to both developers and managers:

- Most of us have experienced frustrations while working with AI. These frustrations become fatiguing, and that makes people start accepting poor responses even when it slows them down in the long term. It may be helpful to take note of which types of AI tasks engage you and which drain you, and lean into the engaging ones when possible.
- While some companies push employees to use AI more, research shows that pushing people to use AI may make them more likely to over-rely on it and accept more incorrect responses, which actually reduces job performance.
- Adding extra work pressures on top of that just adds fuel to the fire, with people becoming too quick to accept what AI says, only to have it trip them up later on.