STOCKHOLM, Sweden: Generating 3D representations of dental and maxillofacial structures is a key step in many digital dental workflows; however, whether involving manual or semi-automated segmentation, the process can be time-intensive and prone to observer bias. Seeking to address these concerns, researchers in Belgium and Sweden have trained and evaluated a cloud-based platform for the automated segmentation of impacted maxillary canines in CBCT images. They reported that the tool achieved consistent and precise results much faster than experts did.
The tool was trained using a convolutional neural network, a computational model that learns to identify dental conditions and anomalies from patterns and features within images. A total of 100 CBCT images featuring impacted maxillary canines were used: 50 to train the model and 50 to assess its performance. Virtual Patient Creator, an online cloud-based platform previously trained to segment multiple dental and maxillofacial structures, was used for both tasks, and the model's performance was evaluated against semi-automated segmentations performed by clinical experts by comparing each individual voxel in 3D space as well as the shapes of the segmented objects.
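As a rough illustration of the voxel-wise comparison described above, overlap between two segmentations is commonly summarised with a Dice similarity coefficient. The sketch below is not taken from the study or the Virtual Patient Creator platform; the function name, array shapes and example masks are assumptions chosen for demonstration only.

```python
import numpy as np

def dice_coefficient(auto_mask: np.ndarray, reference_mask: np.ndarray) -> float:
    """Voxel-wise overlap between an automated and a reference segmentation.

    Both inputs are 3D boolean arrays of identical shape (one entry per CBCT
    voxel); a value of 1.0 means the two segmentations are identical.
    """
    auto = auto_mask.astype(bool)
    ref = reference_mask.astype(bool)
    intersection = np.logical_and(auto, ref).sum()
    total = auto.sum() + ref.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total

# Hypothetical example: two slightly different 3D masks of an impacted canine
automated = np.zeros((64, 64, 64), dtype=bool)
semi_automated = np.zeros((64, 64, 64), dtype=bool)
automated[20:40, 22:38, 25:35] = True
semi_automated[21:40, 22:38, 25:36] = True
print(f"Dice similarity: {dice_coefficient(automated, semi_automated):.3f}")
```

A Dice value close to 1.0, together with a shape-based distance measure, is the usual way such voxel-level and object-level agreement is quantified.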
According to the study results, the automated tool provided consistent and accurate segmentation of impacted maxillary canines with various angulations. “The performance of the model was comparable to that of [semi-automated segmentations] performed by clinical experts,” the researchers wrote. They added: “It is noteworthy that the model showed 100% consistency without the issue of human variability, where it was able to produce identical results when segmenting the same case multiple times. Moreover, only minor refinements were required which confirmed high similarity between [automated segmentation] and [semi-automated segmentation].”
The model also performed the segmentation rapidly. It required an average of 21 seconds for the automated segmentation of impacted canines, compared with an average of 582 seconds for semi-automated segmentation, making it nearly 28 times faster.
The study, titled “Deep learning driven segmentation of maxillary impacted canine on cone beam computed tomography images”, was published online on 3 January 2024 in Scientific Reports.