Automatic audiogram classification

Full title: Automatic classification of audiograms using an AI algorithm.

Project period

Start: 2020
End: 2024

Aim

The project aims to use AI to recognise audiogram patterns, targeting at least 8 to 10 of the most common audiogram subtypes. These subtypes reflect different underlying pathologies and are clinically relevant for deciding on further investigations and treatment of patients. Automatic audiogram classification saves time and ensures consistency in the determination of audiogram subtypes. A minimal illustrative sketch of such a classification task is shown below.
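The sketch below is only an illustration of the kind of multi-class classification involved, not the project's actual method. It assumes an audiogram is represented as air-conduction thresholds (dB HL) at standard audiometric frequencies and uses a generic scikit-learn classifier; the feature representation, the model choice and the load_audiograms helper are all hypothetical placeholders.

# Minimal sketch of multi-class audiogram classification (illustrative only).
# Assumes each audiogram is a vector of air-conduction thresholds in dB HL
# at standard frequencies (250 Hz - 8 kHz); labels are hypothetical subtype
# names such as "flat", "sloping" or "notch".
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

FREQUENCIES_HZ = [250, 500, 1000, 2000, 4000, 8000]

def load_audiograms():
    """Hypothetical loader: returns an (n_samples, 6) threshold matrix and subtype labels."""
    rng = np.random.default_rng(0)
    X = rng.uniform(0, 90, size=(200, len(FREQUENCIES_HZ)))  # thresholds in dB HL
    y = rng.choice(["flat", "sloping", "notch"], size=200)   # placeholder subtypes
    return X, y

X, y = load_audiograms()
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))

In practice the project would train on labelled clinical audiograms rather than the synthetic placeholder data used here.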

The results and methodologies developed in this project have the potential to be used in the project “User-operated Audiometry (UAud)”, where patients measure their own hearing thresholds without the assistance of a technically skilled person. UAud would benefit from an AI algorithm that assists in interpreting audiograms and identifying different types of hearing loss.

The project builds on data collected in the project “Hearing loss and dementia: Towards a better understanding of the underlying mechanisms”.

Participants

Funding

The project itself has not yet received specific funding, but the collaborating project “Hearing loss and dementia: Towards a better understanding of the underlying mechanisms” has received funding from the William Demant Foundation, and UAud is funded by Innovation Fund Denmark, the William Demant Foundation and the collaborating partners, including Demant (Interacoustics and Oticon), SDU and OUH.

Jesper Hvass Schmidt

Chief Physician, PhD

Odense University Hospital, Department of ORL - Head & Neck Surgery


(+45) 3055 9991