We’ve all laughed at crime TV shows and movies where investigators “enhance” blurry photo evidence as if it were that easy. Well, thanks to Google, that kind of tech may actually become a reality.
The new Google AI photo upscaling tech works pretty much exactly as the name suggests. Google describes it in a blog post titled “High Fidelity Image Generation Using Diffusion Models”.

Google‘s Brain Team developed an image super-resolution approach that uses a trained machine learning model to turn blurry, low-resolution photos into clear, high-resolution images.
While authorities could indeed use it to help solve crimes, it’s primarily aimed at family photo restoration, medical imaging improvement, and other less sci-fi applications.

As you probably know, it took Google years to develop this technology. In fact, diffusion models were first proposed way back in 2015, but until recently they had been outshone by other families of deep generative models, such as GANs.
The folks at Google have now found that, when humans are asked to judge the outputs, the results from their diffusion models are superior to those of existing technologies.

Google gives a detailed explanation of the first approach, SR3, or Super-Resolution via Repeated Refinement:
SR3 is a super-resolution diffusion model that takes as input a low-resolution image, and builds a corresponding high resolution image from pure noise.
The model is trained on an image corruption process in which noise is progressively added to a high-resolution image until only pure noise remains.
It then learns to reverse this process, beginning from pure noise and progressively removing noise to reach a target distribution through the guidance of the input low-resolution image.
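To make that idea more concrete, here is a minimal, hypothetical sketch in Python of the two halves of the process: a forward loop that progressively corrupts a high-resolution image with noise, and a reverse loop that starts from pure noise and refines it under the guidance of the low-resolution input. The denoiser, noise schedule, and shapes below are simplified stand-ins, not Google’s actual SR3 model.

```python
# Illustrative sketch of an SR3-style process; all functions are toy stand-ins.
import numpy as np

def forward_corruption(hi_res, num_steps=1000, beta_max=0.02):
    """Progressively add Gaussian noise to a high-resolution image
    until (approximately) only pure noise remains."""
    betas = np.linspace(1e-4, beta_max, num_steps)
    x = hi_res.copy()
    for beta in betas:
        noise = np.random.randn(*x.shape)
        x = np.sqrt(1.0 - beta) * x + np.sqrt(beta) * noise
    return x  # close to pure noise after enough steps

def denoiser(noisy, low_res, step):
    """Placeholder for the trained network that predicts the noise to remove,
    guided by the low-resolution input. A real SR3 model is a large neural
    network; 'step' is unused in this toy version."""
    upsampled = np.kron(low_res, np.ones((4, 4)))  # naive 4x upsampling
    return (noisy - upsampled) * 0.05              # nudge toward the guidance

def reverse_refinement(low_res, hi_res_shape, num_steps=1000):
    """Start from pure noise and progressively remove noise,
    guided by the low-resolution image."""
    x = np.random.randn(*hi_res_shape)
    for step in reversed(range(num_steps)):
        predicted_noise = denoiser(x, low_res, step)
        x = x - predicted_noise  # one refinement step
    return x

if __name__ == "__main__":
    hi_res = np.random.rand(64, 64)        # toy high-resolution image
    corrupted = forward_corruption(hi_res) # training-time corruption
    low_res = hi_res[::4, ::4]             # toy 16x16 low-res input
    restored = reverse_refinement(low_res, hi_res.shape)
    print(corrupted.shape, restored.shape) # (64, 64) (64, 64)
```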
After SR3 proved effective, Google stepped it up with a second approach called CDM, or Class-Conditional Diffusion Model.
CDM is a class-conditional diffusion model trained on ImageNet data to generate high-resolution natural images.
Since ImageNet is a difficult, high-entropy dataset, we built CDM as a cascade of multiple diffusion models. This cascade approach involves chaining together multiple generative models over several spatial resolutions: one diffusion model that generates data at a low resolution, followed by a sequence of SR3 super-resolution diffusion models that gradually increase the resolution of the generated image to the highest resolution.
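The cascade can be pictured as a simple pipeline: one model produces a small image, and a chain of super-resolution models repeatedly enlarges it. The sketch below, with hypothetical stand-in functions and resolutions, only illustrates that chaining structure; the real CDM and SR3 stages are large trained diffusion models.

```python
# Rough sketch of the cascading idea behind CDM, using toy stand-in functions.
import numpy as np

def base_diffusion_model(class_label, size=32):
    """Stand-in for a class-conditional diffusion model that generates
    a low-resolution (e.g. 32x32) image for a given class label."""
    rng = np.random.default_rng(class_label)
    return rng.random((size, size))

def sr3_upscale(image, factor=2):
    """Stand-in for an SR3 super-resolution model that raises
    the resolution of its input by the given factor."""
    return np.kron(image, np.ones((factor, factor)))

def cascaded_generation(class_label, stages=(2, 4)):
    """Chain the base model with successive super-resolution stages,
    e.g. 32x32 -> 64x64 -> 256x256, mirroring the cascade described above."""
    image = base_diffusion_model(class_label)
    for factor in stages:
        image = sr3_upscale(image, factor)
    return image

if __name__ == "__main__":
    sample = cascaded_generation(class_label=207)  # hypothetical class id
    print(sample.shape)                            # (256, 256)
```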
You can read the full Google AI blog here.
This is a great opportunity to improve the quality of your own content. Fortunately, anyone can try upscaling now using only an online tool like imageupscaler.