Michael Barrett, Karl Prince, Anmol Arora, Edwin Lee, Eivor Oborn

AI futures for professional work: Reimagining both benefits and potential risks

In an era of AI futures, ophthalmologists, as risk professionals, are reimagining how eye services can be transformed through AI and telemedicine as their organisations become digital. There is significant potential benefit in using millions of images and scans to train algorithms for the diagnosis of eye diseases, and of course tools such as ChatGPT-4 make this even easier. AI in ophthalmology can potentially perform as well as human ophthalmologists while being faster and more sustainable; algorithms, if used autonomously, do not get tired. Their use could help deliver widespread benefits of reduced costs, shorter waiting times, and increased productivity across the health system. However, alongside these benefits there are risks associated with information governance and the informed consent of patients, so that doctors can ensure they ‘do no harm’ to patients. Further, as professionals, ophthalmologists are ethically expected to maintain a degree of altruism in their work.

 

To maintain these professional principles in an AI era, challenges related to algorithmic bias need careful reflection. Algorithmic bias is fundamentally caused by a lack of digital data for marginalised patient groups, perpetuating digital inequalities by deepening their marginalisation. Recent research has noted that AI risks systematically underperforming for marginalised patients, who are not adequately represented in the data used to train the algorithms. One study constructed a dataset of retinal fundus photos that specifically excluded dark-skinned patients and trained a machine learning algorithm on this biased dataset to detect diabetic retinopathy. The algorithm was only 60.5% accurate at detecting retinopathy in dark-skinned patients, compared with 73.0% in light-skinned patients. There have been attempts to overcome such algorithmic bias in ophthalmology by using synthetic data to create images corresponding to marginalised populations for training algorithms. Such rebalancing of the dataset improved the algorithm’s effectiveness on dark-skinned patient images, increasing the accuracy of diagnosing diabetic retinopathy in dark-skinned patients from 60.5% to 71.0%.
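To make this concrete, the disparity reported above is exactly what a per-subgroup evaluation surfaces and what a single overall accuracy figure hides. The sketch below is a minimal illustration, not the study’s actual code: `model`, `images`, `labels`, and `skin_tone` are hypothetical names for a fitted classifier, retinal image features, retinopathy labels, and a per-patient subgroup tag.

```python
import numpy as np

def subgroup_accuracy(model, images, labels, skin_tone):
    """Report overall and per-subgroup accuracy to surface hidden bias."""
    preds = model.predict(images)             # predicted retinopathy labels
    labels = np.asarray(labels)
    skin_tone = np.asarray(skin_tone)
    print(f"overall accuracy: {np.mean(preds == labels):.3f}")
    # Accuracy computed separately for each subgroup, with group sizes,
    # is what reveals a gap like 60.5% vs 73.0%.
    for group in np.unique(skin_tone):
        mask = skin_tone == group
        acc = np.mean(preds[mask] == labels[mask])
        print(f"  {group}: accuracy {acc:.3f} (n={mask.sum()})")
```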

 

Whilst this method of generating synthetic data to balance datasets is promising, it still requires an existing set of data for the marginalised groups from which to synthesise more images. Furthermore, images acquired by low-cost devices in low- and middle-income countries may not be comparable with higher-quality photos obtained in high-income countries, compromising training datasets. Using generative AI also raises the concern that synthetic medical records, whether imaging or tabular data, could be used as a mechanism for evading data privacy legislation. As far as professional work is concerned, challenges remain. What are the implications for the technicians and graders who support the specialist work of ophthalmologists? How might the duality of risk associated with generative AI prove harmful and/or beneficial, and which professional work is likely to be affected?
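Returning to the first point above, a brief sketch shows why synthesis cannot escape the seed-data requirement: a generative model must first be fitted on real images from the under-represented group before it can produce synthetic ones. This is an assumed workflow, not a specific library API; `fit_generator` and the generator’s `sample` method are hypothetical stand-ins for, say, a GAN training routine and its sampling step.

```python
import numpy as np

def rebalance_with_synthetic(majority_images, minority_images, fit_generator):
    """Top up the under-represented group with synthetic images."""
    # The generator must be fitted on REAL seed images from the minority
    # group -- synthesis cannot conjure a group with no data at all.
    generator = fit_generator(minority_images)
    shortfall = len(majority_images) - len(minority_images)
    synthetic = generator.sample(shortfall)   # generate only the gap
    return np.concatenate([majority_images, minority_images, synthetic])
```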

 

In conclusion, much contemporary professional work, while engaging with risk, increasingly faces a more complex riskscape involving overlapping and contiguous risks. Professionals need to navigate these risk interactions and use technology at work in new ways. In reimagining the future, professionals need to balance and work with the duality of benefits and harms associated with digital transformation as they navigate the emerging riskscape.
