Daniel Leeds – Fordham Now
https://now.fordham.edu — The official news site for Fordham University.

Brain Research Links Vision and Intelligence
Thu, 07 Dec 2017

[Photo: Sarah Cavanagh works with Daniel Leeds on research fostered by the 2017 Columbia and NYU research fellowships.]

As people age, they may slow down physically, but that doesn’t necessarily mean their thinking slows too. Preliminary research by Daniel Leeds, Ph.D., assistant professor of computer and information science, suggests that how the brain processes visual information may be a better indicator of how keen a thinker someone is than age.

“Cognitive decline is not simply a function of age; it is helpful to understand what parts of the brain are connected to changes in thinking and seeing,” he says.

A Collaboration with Columbia

Leeds, who specializes in computational neuroscience, is one of six Fordham faculty members working this year as part of the Columbia and NYU Research Fellow Forum. His recent research has been greatly aided by access to data from the lab of Yaakov Stern, Ph.D., professor and chief of the Neurocognitive Science Division at Columbia University Medical Center.

Leeds’ research on computer data and brain activity regularly intersects with artificial intelligence research. He’s particularly interested in the brain’s connection to eyesight. In the lab, Leeds develops computer models of cognition, particularly of how the brain represents and stores visual information.

“Here at Fordham we have the resources to do the computational work, and we have great faculty in computer and information sciences, psychology and biology. Columbia has similarly excellent scholars. In terms of big sets of data, Columbia has very great resources and state-of-the-art brain scanners,” says Leeds. “I’m very fortunate that the fellowship has supported our access to Dr. Stern’s lab; it has been a great symbiotic relationship.”

Not only has Leeds benefited, but so have junior Sarah Cavanagh, an integrative neuroscience major, and Caleb Hulbert, a master’s candidate in computer science. Through the Fordham-Columbia fellowship, the two students were able to assist Leeds over the summer, and they continue to work with him in analyzing the data.

Processing Visual Information

Different parts of the brain represent visual information differently, Leeds says. Representation of straight lines or edges is most clear at the back part of the brain. Another area of the brain shows more active involvement in perception of written language, and yet another area may best perceive curvy objects, such as faces.

Stern’s lab at Columbia is studying how people perform cognitive tasks from ages 20 to 80, and Leeds has been contributing to their data analysis. Using the data obtained in the Columbia lab, Leeds has also homed in on his specialties: vision and cognition.

While his research is still at a very early stage, he has observed that cognitive ability is more a function of how neural pathways represent visual information than of how the body ages. For example, an avid reader may continue to read well into old age and, at the same time, may have trouble seeing the edge of a table, depending on which neural pathway is better preserved.

Give-and-Take with Artificial Intelligence

The growing collaboration between scientists building computer models and those researching neuroscience has obvious implications for artificial intelligence, says Leeds, but there is ongoing debate about the proper relationship between the two fields.

“I think that artificial intelligence and neuroscience provide great information back and forth,” he said, because by understanding the brain, computer scientists can develop better programs for artificial intelligence, and vice versa.

“I often take A.I. computer models that are already used in computer vision tests and ask how well a model reflects how the brain acts,” he said.
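The article does not describe Leeds’ exact analysis pipeline, but one standard way to ask “how well a model reflects how the brain acts” is representational similarity analysis: check whether pairs of images the model treats as similar are also the pairs the brain treats as similar. The sketch below uses random placeholder data; the array sizes and the data itself are assumptions purely for illustration, not results from the research described here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data for 20 images:
# model_feats: activations from some computer-vision model (20 images x 64 features)
# brain_resp:  fMRI voxel responses to the same images (20 images x 100 voxels)
model_feats = rng.normal(size=(20, 64))
brain_resp = rng.normal(size=(20, 100))

def rdm(responses):
    """Representational dissimilarity matrix: 1 - correlation
    between the response patterns for every pair of images."""
    return 1.0 - np.corrcoef(responses)

def upper_triangle(m):
    """Flatten the strictly upper-triangular entries of a square matrix."""
    i, j = np.triu_indices_from(m, k=1)
    return m[i, j]

# Correlate the two dissimilarity structures: a high value would mean
# the model organizes these images similarly to the brain region.
score = np.corrcoef(upper_triangle(rdm(model_feats)),
                    upper_triangle(rdm(brain_resp)))[0, 1]
print(f"model-brain representational similarity: {score:.3f}")
```

With real data, `model_feats` would come from a trained vision model and `brain_resp` from the scanner; here the random inputs just exercise the computation.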

“If artificial intelligence can help us better understand what’s going on in the brain, and how it can go wrong when different elements are affected, then potentially we can do an intervention on the brain system level, or even a lower-level intervention with behavioral therapy. That’s a good thing.”

Is it a Chair or a Dog? What the Eye Sees, the Brain Confirms
Tue, 01 Sep 2015

Your neurons do not lie. If you see something they respond to, they will light up brightly on the screen of an fMRI machine.

But researchers don’t know exactly why some neurons prefer some images over others.

Daniel Leeds, PhD, assistant professor of computer and information science, is exploring just how that happens.

This past spring, Leeds conducted fMRI scans of the brain of a test subject at Carnegie Mellon University in Pittsburgh.

The subject was shown a series of real-world images over an 80-minute period, while Leeds recorded which neurons in a specific part of the subject’s brain “lit up” with activity.

The study was a continuation of a larger one that Leeds conducted with scientists of Carnegie Mellon in 2012 and published last year in the journal Frontiers in Computational Neuroscience. In his work, Leeds uses a computer program that monitors the brain responses while subjects are observing the images and selects new images in real time to show them.
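The closed loop the article describes (measure the brain’s response, then pick the next image in real time) can be sketched as follows. Everything concrete here is a hypothetical illustration, not Leeds’ actual algorithm: the feature vectors, the `measure_response` stand-in for the scanner, and the greedy rule of showing images similar to the strongest responder so far are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: each candidate image is summarized by a feature vector,
# and showing an image yields one activity measurement.
candidates = {f"img_{i}": rng.normal(size=8) for i in range(50)}

def measure_response(features):
    # Stand-in for an fMRI measurement: a fixed hidden preference plus
    # noise. In the real experiment this value comes from the scanner.
    hidden_pref = np.ones(8) / np.sqrt(8)
    return float(features @ hidden_pref + rng.normal(scale=0.1))

def pick_next(candidates, history):
    """Choose the unshown image most similar to the best responder so far."""
    if not history:
        return next(iter(candidates))          # no data yet: arbitrary start
    best_feat = candidates[max(history, key=history.get)]
    unshown = [n for n in candidates if n not in history]
    return max(unshown, key=lambda n: float(candidates[n] @ best_feat))

history = {}  # image name -> measured response
for _ in range(10):  # show 10 images, adapting the choice after each one
    name = pick_next(candidates, history)
    history[name] = measure_response(candidates[name])

print(sorted(history, key=history.get, reverse=True)[:3])
```

The point of such a loop is efficiency: rather than showing every image, the experiment steers toward stimuli that the monitored brain region appears to care about.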

He and his collaborators were interested in specific visual properties that activate a community of neurons in a few cubic millimeters in the brain. They chose that specific part of the brain because it’s an area where our visual pathway becomes “more sophisticated,” he said.

“You have pixel detectors, effectively, in your eyes, and then you have edge detectors at the beginning of the pathway in your brain,” he said. “And then there are more complex representations as you go along the visual stream.”

However, once past the edge detectors, said Leeds, scientists are a lot more uncertain just what visual properties the brain is using. That’s where Leeds’ research lies—at the intermediate areas after the relatively simple edge detectors, “but before you get to the holistic level of, ‘I see a dog, or I see a chair.’”

Much research has already been done to identify where in the brain our vision is processed. Leeds’ research seeks to determine the visual and mathematical principles certain brain regions use to understand pictures.

Helping Computers “See” Better

He said this new research should help scientists write better algorithms for computers to “see” like human brains do.

One way computers can “see,” Leeds said, is via multilayer artificial neural networks.

“The first layer takes input from pixels, and then it produces its response to simple patterns in those pixels. Then another layer takes output from the bottom layer, comes up with another representation, and communicates it to a higher layer. And then you continue doing this until you get a rhinoceros or a dog,” he said.

“It gets more complex, but effectively the human brain follows a similar process.”
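Leeds’ layer-by-layer description maps directly onto the structure of a feedforward network. The sketch below uses toy layer sizes and random, untrained weights purely to show that structure, so its output “classification” is meaningless; the sizes and the two category labels are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def relu(x):
    # Simple nonlinearity applied between layers
    return np.maximum(x, 0.0)

# Toy untrained network:
# pixels -> simple-pattern layer -> intermediate layer -> category scores
pixels = rng.random(64)          # an 8x8 "image", flattened
W1 = rng.normal(size=(32, 64))   # first layer: responds to simple pixel patterns
W2 = rng.normal(size=(16, 32))   # next layer: combinations of those patterns
W3 = rng.normal(size=(2, 16))    # output layer: one score per category

h1 = relu(W1 @ pixels)           # edge-detector-like responses
h2 = relu(W2 @ h1)               # more complex representation
scores = W3 @ h2                 # holistic category scores
labels = ["rhinoceros", "dog"]   # hypothetical categories from the quote
print(labels[int(np.argmax(scores))])
```

In a real system the weight matrices are learned from many labeled images; only then do the intermediate layers come to encode meaningful visual features.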

There are already computer programs that perform this process well, and Google’s image search engine is one example of a program that can automatically recognize what’s in a picture. But computer algorithms’ “seeing” abilities still have limitations, said Leeds. For example, Ticketmaster and other commercial sites successfully use captchas, or image tests, to weed out bots from human visitors.

The data that came out of the experiment in the spring is still preliminary, but Leeds said they’ve learned some things about visual preferences in the brain in their ongoing work. For some brain regions they researched, statues on big rectangular pedestals sparked more excitement than statues without pedestals. For other brain regions, shiny surfaces or jagged surfaces were exciting.

“Understanding what types of visual information are important to brain regions helps us understand how the brain approaches the task of ‘seeing’ objects,” said Leeds.

Leeds said the new program he has written to analyze brain data has improved his team’s understanding of vision in the brain. He and his team are working on publishing new results now.
