STOCKHOLM, Sweden: Generating 3D representations of dental and maxillofacial structures is a key step in many digital dental workflows; however, whether involving manual or semi-automated segmentation, the process can be time-intensive and prone to observer bias. Seeking to address these concerns, researchers in Belgium and Sweden have trained and evaluated a cloud-based platform for the automated segmentation of impacted maxillary canines in CBCT images. They reported that the tool achieved consistent and precise results much faster than experts did.
The tool is built on a convolutional neural network, a computational model that learns to identify dental conditions and anomalies from patterns and features in images. A total of 100 CBCT scans featuring impacted maxillary canines were used: 50 to train the model and 50 to assess its performance. Both tasks were carried out on Virtual Patient Creator, an online cloud-based platform previously trained to segment multiple dental and maxillofacial structures. The model’s performance was evaluated against semi-automated segmentations performed by experts, comparing each individual voxel in 3D space as well as the object shapes in the images.
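The study does not publish its evaluation code, but voxel-wise comparison of an automated mask against an expert mask is typically summarised with overlap metrics such as the Dice similarity coefficient. The sketch below is purely illustrative (the function name and toy volumes are assumptions, not taken from the study):

```python
import numpy as np

def dice_coefficient(auto_mask: np.ndarray, expert_mask: np.ndarray) -> float:
    """Dice similarity coefficient between two binary 3D segmentation masks:
    2 * |A ∩ B| / (|A| + |B|), ranging from 0 (no overlap) to 1 (identical)."""
    auto = auto_mask.astype(bool)
    expert = expert_mask.astype(bool)
    intersection = np.logical_and(auto, expert).sum()
    total = auto.sum() + expert.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total

# Toy 4x4x4 volumes standing in for CBCT segmentation masks
auto = np.zeros((4, 4, 4), dtype=bool)
expert = np.zeros((4, 4, 4), dtype=bool)
auto[1:3, 1:3, 1:3] = True      # 8 voxels
expert[1:3, 1:3, 1:4] = True    # 12 voxels, 8 of them shared
print(round(dice_coefficient(auto, expert), 2))  # → 0.8
```

Shape-based comparisons, by contrast, are usually reported with surface-distance measures (e.g. Hausdorff distance) rather than voxel overlap.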
According to the study results, the automated tool provided consistent and accurate segmentation of impacted maxillary canines with various angulations. “The performance of the model was comparable to that of [semi-automated segmentations] performed by clinical experts,” the researchers wrote. They added: “It is noteworthy that the model showed 100% consistency without the issue of human variability, where it was able to produce identical results when segmenting the same case multiple times. Moreover, only minor refinements were required which confirmed high similarity between [automated segmentation] and [semi-automated segmentation].”
The model also performed the segmentation rapidly. It required an average of 21 seconds to segment an impacted canine automatically, compared with an average of 582 seconds for semi-automated segmentation, making it roughly 28 times faster.
The study, titled “Deep learning driven segmentation of maxillary impacted canine on cone beam computed tomography images”, was published online on 3 January 2024 in Scientific Reports.