Media Kampung – March 26, 2026 | A new study published in Radiology shows that radiologists often fail to recognize AI‑generated chest X‑rays. The investigation examined whether experts could distinguish genuine scans from those produced by large language models, revealing unsettling results.

The experiment involved seventeen board‑certified radiologists who were asked to diagnose patients based solely on images presented to them. Synthetic scans were created by prompting ChatGPT with anatomical location, disorder and noise level, yielding images that resembled clinical X‑rays.

When participants first reviewed the images, only forty‑one percent sensed that something was amiss. The majority proceeded with diagnoses as if the pictures were authentic, highlighting the visual fidelity of the deepfakes.

After being informed that some images might be fabricated, the radiologists improved their performance, correctly labeling seventy‑five percent of the cases. Despite this increase, a quarter of the assessments remained inaccurate, indicating persistent vulnerability.

The generation process required only textual prompts, yet the output rivaled that of models trained on millions of medical images. Researchers noted that this ease of creation raises concerns about widespread misuse in clinical settings.

Four multimodal AI systems, including the one that produced the fake scans, were tested for detection capability. Their success rates varied from fifty‑seven to eighty‑five percent, confirming that current AI tools cannot reliably flag their own forgeries.

Lead author Mickael Tordjman warned, "If you give back to ChatGPT the same image, it won't be able to say for sure this is AI and this is not AI," calling the limitation "disturbing." His comment underscores the paradox of relying on AI‑based verification to catch AI‑generated content.

While no documented incidents of deepfake radiographs disrupting patient care have emerged, experts fear the potential for diagnostic errors and legal liabilities. The risk becomes more acute as generative models become publicly accessible.

Two senior radiologists wrote an editorial emphasizing that unchecked deepfake technology could erode public confidence in medical institutions. They urged immediate development of detection standards and regulatory oversight.

The broader medical community is now debating how to integrate AI safeguards without stifling innovation. Proposals include watermarking generated images, mandatory provenance metadata, and continuous training of clinicians on emerging threats.

In a separate AI‑related episode, the social‑media platform X became the center of viral confusion over a post dated 1992. Users shared a screenshot claiming the tweet predated the platform's launch, sparking widespread debate.

The post, originally from 2011, displayed an erroneous timestamp that appeared to be September 2, 1992, prompting speculation about time‑traveling content. X’s built‑in AI assistant Grok intervened to explain the anomaly.

Grok responded that the date was a display glitch caused by legacy timestamp handling, noting the actual posting time was December 11, 2011 at approximately 10:02 JST. The AI clarified the technical cause without resorting to sensationalism.

Technical analysis suggested that the original Unix millisecond timestamp (1,323,565,370,000) was truncated to fit a 32‑bit unsigned integer, and the truncated value was then interpreted as seconds since 1970‑01‑01. This overflow produced the misleading 1992 date.
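The reported glitch is easy to reproduce. The sketch below assumes the legacy code truncated the millisecond timestamp to 32 bits and then read the result as seconds since the epoch; under that assumption, the December 2011 posting time turns into a September 1992 date:

```python
from datetime import datetime, timezone

# Original Unix timestamp of the post, in milliseconds
ms = 1_323_565_370_000

# Correct interpretation: milliseconds since 1970-01-01 UTC
actual = datetime.fromtimestamp(ms / 1000, tz=timezone.utc)

# Hypothesized bug: value truncated to 32 bits, then read as *seconds*
truncated = ms % 2**32  # 715_442_832
glitched = datetime.fromtimestamp(truncated, tz=timezone.utc)

print(actual)    # 2011-12-11 01:02:50+00:00 (10:02 JST)
print(glitched)  # 1992-09-02 14:07:12+00:00
```

The 32‑bit truncation discards everything above 4,294,967,295, leaving a remainder small enough to land in 1992 when mistaken for a count of seconds.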

The incident illustrated how AI can both expose hidden software bugs and inadvertently amplify misinformation when users misinterpret its explanations. Clear communication remains essential to maintain trust.

Both the deepfake X‑ray study and the X timestamp glitch demonstrate a common theme: AI systems excel at creating convincing artifacts but often lack self‑awareness to identify their own errors. This duality challenges regulators and practitioners alike.

Experts advocate for a multi‑layered defense, combining human expertise, AI‑driven detection algorithms, and transparent reporting mechanisms. Such an approach aims to prevent malicious exploitation while preserving the benefits of generative technology.

As generative AI continues to permeate healthcare and social media, vigilance and proactive policy will determine whether society can harness its potential without compromising safety or credibility. Ongoing research and interdisciplinary collaboration are vital to staying ahead of emerging threats.

This article was published by Media Kampung.