Camera technology is incredibly advanced nowadays, but it has nothing on what the universe can do. When scientists want to study objects too far away to be resolved by even our best telescopes, they turn to a remarkable phenomenon called gravitational lensing. This neat video explains what gravitational lensing is and why it is so useful for observing the early universe.
Coming to you from PBS Space Time, the video breaks down how the effect works. First predicted by Einstein’s Theory of General Relativity, a gravitational lens is created when a massive concentration of matter passes between a light source and an observer, warping spacetime to such a degree that the light is significantly bent and the image of the background source is magnified. The degree of magnification depends on the masses and geometry involved, but it can be significant; for example, in 2018, using the Hubble Space Telescope, University of Hawaiʻi researchers found that the galaxy eMACSJ1341-QG-1 was magnified by a factor of approximately 30 by the galaxy cluster eMACSJ1341.9-2441. It’s a remarkable phenomenon that allows us to study faraway areas of the universe in finer detail than otherwise possible. Check out the video above for the full rundown.
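For readers curious about the underlying math (this is standard lensing theory, not something spelled out in the video): for an idealized point-like lens, the characteristic angular scale of the effect, the Einstein radius, depends on the lens mass and the distances involved:

$$
\theta_E = \sqrt{\frac{4GM}{c^2}\,\frac{D_{LS}}{D_L D_S}}
$$

Here $M$ is the mass of the lens, and $D_L$, $D_S$, and $D_{LS}$ are the distances to the lens, to the source, and between lens and source, respectively. Background sources that appear near this angle on the sky are the ones that get strongly magnified, which is why massive galaxy clusters make such effective natural telescopes.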
AI researchers at Microsoft reached a major milestone this week: they managed to create a new “artificial intelligence system” that is, in many cases, actually better than a human at describing the contents of a photo. This could be a huge boon for blind and sight-impaired individuals who rely on screen readers and “alt text” when viewing images online.
While this might seem like one part of the prequel to Skynet, the development of a better image captioning AI has a lot of potential benefits, and warrants a bit of (cautious) celebration. As Microsoft explains on its blog: “[this] breakthrough in a benchmark challenge is a milestone in Microsoft’s push to make its products and services inclusive and accessible to all users.”
That’s because accurate automatic image captioning is used widely to create so-called “alt text” for images on the Internet—that’s the text that screen readers use to describe an image to sight-impaired individuals who rely on these accessibility options to make the most of their time online or when using certain apps on their smartphones.
Of course, Microsoft is careful to point out that the system “won’t return perfect results every time.” But as you can see from the examples in the video below, it’s far more accurate than the previous iteration. There’s a wide gulf between describing an image as “a close up of a cat” and describing that same image as “a gray cat with its eyes closed.”
“Ideally, everyone would include alt text for all images in documents, on the web, in social media – as this enables people who are blind to access the content and participate in the conversation. But, alas, people don’t,” explains Saqib Shaikh, a software engineering manager for Microsoft’s AI group. “So, there are several apps that use image captioning as way to fill in alt text when it’s missing.”
These apps can take advantage of the new system to generate accurate captions that “surpass human performance,” a claim that’s based on the nocaps image captioning benchmark that compares AI performance against the same data set captioned by humans.
Here’s another example of the improved AI in action, pulled from the video above:
Given the potential accessibility benefits of the improved captioning system, Microsoft has rushed this model into production and has already integrated it into Azure’s Cognitive Services, enabling interested developers to begin using the tech right away.
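As a rough illustration of what “using the tech” looks like, here is a minimal sketch of calling the Computer Vision “describe” REST operation with Python’s standard library. The endpoint, key, and image URL below are placeholders you would replace with your own Azure resource details, and the exact API version available to you may differ:

```python
# Hypothetical sketch of an Azure Computer Vision "describe" call.
# The endpoint, subscription key, and image URL are placeholders.
import json
import urllib.request


def build_describe_request(endpoint, key, image_url):
    """Build an HTTP request asking the service to caption an image URL."""
    url = endpoint.rstrip("/") + "/vision/v3.1/describe?maxCandidates=1"
    body = json.dumps({"url": image_url}).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={
            "Ocp-Apim-Subscription-Key": key,
            "Content-Type": "application/json",
        },
    )


# Example usage (requires a real Azure resource and key):
# req = build_describe_request(
#     "https://<your-resource>.cognitiveservices.azure.com",
#     "<your-key>",
#     "https://example.com/cat.jpg",
# )
# with urllib.request.urlopen(req) as resp:
#     result = json.load(resp)
#     print(result["description"]["captions"][0]["text"])
```

The response JSON contains a ranked list of candidate captions with confidence scores, which an app could drop straight into an image’s alt text when none is provided.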
To learn more about this system and how it works, head over to the Microsoft blog or read up on the nitty gritty details here. Suffice it to say this isn’t exactly Skynet, but we can be pretty sure that future Terminators will be able to describe your photo library better than you can…
Photographer and “mad scientist” Don Komarechka is back for a DPReview TV episode on ultraviolet light. Specifically, he explains how a modified camera-and-filter combination can reveal hidden ultraviolet patterns that are invisible to the human eye, but crucial for pollinators like bees.
Human trichromatic vision is limited to the so-called “visible” portion of the electromagnetic spectrum, but the spectrum doesn’t simply stop at those boundaries. Immediately adjacent to visible light are near-infrared and infrared on one end and ultraviolet on the other, both of which can be captured using specially modified cameras.
We’ve featured infrared photography many times before, but in this video, Komarechka heads over to the other end to reveal the hidden world of ultraviolet light. Specifically, he shows you the hidden patterns that pollinators like bees use to home in on certain flowers. The results can be downright shocking:
From streaks leading to the pollen source, to big fat ultraviolet landing pads, these patterns are completely invisible to the naked human eye, but they play a crucial role in the plant kingdom. As Komarechka demonstrates over and over, much like macro photography, a special ultraviolet camera-and-filter combination can reveal beautiful new worlds that have been hiding in your backyard or garden all along.
As a bonus, the camera can also reveal the protective power of sunscreen in pretty stark terms… a trick that has gone viral a few times before:
To learn more about ultraviolet light and ultraviolet photography, and see many more fascinating before-and-after photos of “monochromatic” flowers that aren’t, check out the full DPReview TV episode above.
And if you want to see more uses for ultraviolet photography, check out these ultraviolet portraits we featured back in 2017.
Australian fine art photographer Leila Jeffreys has been shooting studio portraits of birds since 2008. In addition to capturing the beautiful plumage across various species, Jeffreys also shows how birds can have expressions that are strangely humanlike.
“I’ve long noticed how many birds have specific expressions, just like us,” the photographer says.
Jeffreys has spent years researching and exploring the world of birds alongside conservationists, ornithologists, and sanctuaries. After finding her subjects, she works to develop an “intimate” relationship with them before they go before her camera.
“[Jeffreys] is best known for visceral and mysterious images of birds that explore and subvert the traditions of portraiture,” writes Australian writer Neha Kale. “Her avian subjects are photographed at human scale with a startling attention to color, line, form and composition.
“For Jeffreys, birds are both medium and message. Her practice opens windows into critical questions about the shared anthropomorphism that connects humans with animals, the sense of wildness that tugs at the fringes of everyday existence and the fleeting and precious connections that bind us to the natural world.”
For her latest project and exhibition, titled High Society, Jeffreys photographed budgies in pairs and groups to show the flock societies birds create.