Artificial intelligence produces misinformation when asked medical questions, but there is scope for it to be fine-tuned to assist doctors, a new study has found. Researchers at Google tested the performance of a large language model, similar to the one that powers ChatGPT, on its responses to multiple choice exam questions and commonly asked medical questions. They found the model incorporated biases about patients that could exacerbate health disparities, and that it produced inaccurate answers to medical questions. However, a version of the model developed by Google to specialise in medicine stripped out some of these negative effects and recorded a...