What is a “Reverse Image Search”?
Reverse image search is a technique for searching with an existing image rather than text. The search engine takes an image as the query and finds visually similar images. This technology lets users find information about an image, including its source, copyright information, related images, and further descriptions of the image in question.
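To make the idea of "visually similar" concrete, here is a minimal sketch of an average hash (aHash), one of the simplest techniques for comparing images. This is an illustration only: the toy 4x4 "images" are made-up pixel grids, and production engines like Google Lens or TinEye use far more sophisticated learned features than this.

```python
# Sketch of an "average hash" (aHash): a compact fingerprint where each bit
# records whether a pixel is brighter than the image's mean brightness.
# Similar images produce similar fingerprints.

def average_hash(pixels):
    """Turn a small grayscale grid (rows of 0-255 ints) into a list of bits."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming_distance(h1, h2):
    """Count differing bits; smaller means more visually similar."""
    return sum(a != b for a, b in zip(h1, h2))

# Two toy 4x4 "images": img_b is a slightly brightened copy of img_a.
img_a = [[10, 20, 200, 210],
         [15, 25, 205, 215],
         [12, 22, 202, 212],
         [18, 28, 208, 218]]
img_b = [[p + 10 for p in row] for row in img_a]
# A third, unrelated "image" with a very different pattern.
img_c = [[200, 10, 200, 10],
         [10, 200, 10, 200],
         [200, 10, 200, 10],
         [10, 200, 10, 200]]

ha, hb, hc = average_hash(img_a), average_hash(img_b), average_hash(img_c)
print(hamming_distance(ha, hb))  # 0: near-duplicate survives the brightness edit
print(hamming_distance(ha, hc))  # 8: visually different image is far away
```

Note how the brightened copy hashes identically to the original, which is exactly the property a reverse image search needs: lightly edited copies of an image should still be found.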
Potential Use Cases of Image Searching
- Identifying the source of an image, e.g. finding the online store that provides the image
- Finding similar images
- Collecting additional information on the image e.g. context, descriptions and more
- Identifying objects and landmarks
Note: The list above is not exhaustive; there are many other ways to use the technology.
Performing a Reverse Image Search
Using an AI-generated image, I will use Google Lens to run a quick uniqueness check. If the image is unique, the search results should be limited; if not, they should be extensive.
There are various alternatives to Google Lens, such as Bing's visual search, which is integrated into its search engine in a similar fashion, or TinEye, a website that lets you paste an image URL or upload a file from your device.
The image to be used:
prompt: “grainy marvel comic book style design of toxic jawbreaker sweet oozing wishes of fortune and silly dreams floating among a sea of identical jawbreakers. Cartoon vector art. Thick outlines, hand drawn texture, exaggerated shapes and proportions .Abstract. Grunge nostalgiacore cartoon dark.Antonucci and Mary Blair art style.”
Note: The prompt is for the original image. The image above has been edited but the core visual elements largely remain the same.
The original image:
The Search — Google Lens
You can search Google by clicking the camera icon in the search bar, or by selecting the Lens option within Photos on a Pixel. I did the latter. There may be other alternatives, but those are the obvious ways to perform a search using Google Lens.
The search results appear almost instantly which highlights the immense capability of the image search. However, it could make you question how extensive the search is. Is that a valid question to ask?
Equation of Life: The longer it takes = The better results expected
The Search Results
The search outputs what are called ‘Visual Matches’.
They were indeed ‘Visual Matches’!
The results show mild similarity in terms of repeating objects. Some results consist of treats, but some are completely off topic and contain baseballs.
A commonality is the style: every image has a cartoon, abstract collage look to it.
The search was able to find green lookalikes, though the green is slightly off from the original. Definitely impressive nonetheless.
The batch of results above is more focused on the image content, which was an abstract assortment of random items. That is exactly what Google Lens found. It would be interesting to know exactly how it recognised and classified that property of the image. The property appears to carry a high weight in the search results, given that most of the images found exhibit a similar property. Not identical, but similar.
It is also clear that colour carries a low weight in the search results, though it is not completely ignored.
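The idea of some properties mattering more than others can be sketched as a weighted similarity score. The feature names, scores, and weights below are entirely hypothetical, chosen just to illustrate the observation; Google Lens's actual ranking model is not public.

```python
# Hypothetical sketch of weighted feature matching: style dominates the
# ranking while colour contributes only a little. All numbers are made up
# for illustration.

# Per-feature similarity scores (in [0, 1]) between the query image and
# two imagined candidate results.
candidates = {
    "cartoon_collage_of_treats": {"style": 0.9, "content": 0.7, "colour": 0.4},
    "green_pile_of_pills":       {"style": 0.2, "content": 0.3, "colour": 0.95},
}

# Style weighted highest; colour low but not zero, matching the observation
# that colour is not completely ignored.
weights = {"style": 0.6, "content": 0.3, "colour": 0.1}

def weighted_score(features):
    return sum(weights[name] * score for name, score in features.items())

ranked = sorted(candidates, key=lambda c: weighted_score(candidates[c]), reverse=True)
for name in ranked:
    print(name, round(weighted_score(candidates[name]), 3))
```

Under these made-up weights, the stylistically matching cartoon collage outranks the pile of pills despite the pills being a much closer colour match, which mirrors what the Lens results above seem to do.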
Google Web Search Result
You can also observe web results for the reverse image search which produced vastly different results.
It tried to match the colour, and it appears to have sacrificed image content in the process. A high proportion of the images are realistic compared to the original, which was a cartoon, or at least had a cartoon style to it.
Also, the images have a significantly different context to the original image. At least the Google Lens results correctly matched that I was looking for cartoon art images and not just any green collection of items, which could be medical pills, rocks and so on. You could say it matched the image too precisely. Very computer-like!
Conclusions
It performs impressively, but there is still room for improvement.
The uniqueness check of my image showed that while there were images portraying a similar concept, my image was mostly unique in the sense that there were no direct lookalikes.
It will be fascinating to see how image classification and matching develops in the future. Will developers be able to strike a perfect balance of image subject, style, context and other features to find only close lookalikes?