Department of Computer and Information Science – Fordham Now
https://now.fordham.edu

AI and Cybersecurity: Grant Funds New Teaching Tools
https://now.fordham.edu/science-and-technology/ai-and-cybersecurity-grant-funds-new-teaching-tools/
Wed, 11 Sep 2024

Fordham’s Center for Cybersecurity has secured a $125,000 grant from the United States Department of Defense to create a curriculum focused on AI-enabled cybersecurity tools.

Titled “Enhancing Cybersecurity Education through AI-Integrated Curriculum Development for Faculty,” the year-long grant will fund the creation of 10 teaching modules that will be used by other institutions that teach cybersecurity.

Thaier Hayajneh, Ph.D., director of the Fordham Center for Cybersecurity, said that he and Gary Weiss, Ph.D., professor of computer and information science, will work with academics from other universities and private sector experts to create the coursework. They will hold workshops over the next year to solicit feedback and finish in the fall of 2025.

Threat Detection and Response

Hayajneh said a key focus of this new curriculum will be employing AI for rapid threat detection and response.


“What people in the industry are trying to do with AI is automate most of those things that we used to do manually,” Hayajneh said. 

“The readings, the observations, the analytics that we always have been doing—everything has AI being integrated into it,” he said.

“There is now AI-enhanced intrusion detection network security that’s used as a defense. But hackers also use AI to crack passwords and search for vulnerabilities in your system faster than before, so you have to test your systems with traditional attack capabilities but also with AI.”

Teaching the Teachers

The team’s recommended curriculum will be shared with the National Security Agency (NSA) and the Department of Defense for feedback, and the NSA will then make it available to eligible institutions through its digital library. 

“The curriculum is designed for faculty from other institutions, with the goal of bridging the gap between institutions that don’t have the expertise and the capability to develop AI-related cybersecurity courses,” Hayajneh said.

 “The ultimate goal is to teach the teachers.”

The grant is the fourth one of this type that the center has received. In 2017, it was awarded two grants worth $270,000 to develop a cybersecurity core curriculum and help build hands-on lab environments for cybersecurity training. In 2019, it received $300,000 to create a curriculum related to iOS and Android operating systems.

New Master’s Degree to Combine Economics and Data Science
https://now.fordham.edu/colleges-and-schools/graduate-school-of-arts-and-sciences/new-degree-to-enhance-economic-computer-skills/
Wed, 29 May 2024

This fall, Fordham will offer a new master’s degree in Data Science and Quantitative Economics.

The interdisciplinary degree will give students computational tools and techniques from the field of data science, as well as economic theory and statistical training from the field of economics.

“Many employers want students who can manage and analyze large data sets,” said Johanna L. Francis, Ph.D., chair of the economics department.

While similar to the dual MA/MS program currently offered by the departments of Economics and Computer and Information Sciences, the new degree will be a single M.S. It will be offered by the Graduate School of Arts and Sciences. 

Meeting Employer Demand

Francis said the degree was created to meet the employer demand for graduates with expertise in Python, a high-level, general-purpose programming language, and R, a programming language for statistical computing and data visualization.

“There’s software that you can use where you don’t need to have much programming experience, but many employers would prefer students who are able to at least code some of their own analysis.”
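The kind of analysis Francis describes can be quite short once a student is comfortable coding. Below is a minimal, hypothetical sketch in Python using pandas; the dataset, region codes, and column names are invented purely for illustration.

```python
# A minimal sketch of the kind of analysis an economics graduate might
# code directly in Python rather than rely on point-and-click software.
# The dataset and column names here are hypothetical.
import pandas as pd

# Hypothetical observations: region, unemployment rate (%), median income ($)
data = pd.DataFrame({
    "region":       ["NE", "NE", "SE", "SE", "MW", "MW"],
    "unemployment": [4.1, 3.8, 5.2, 4.9, 4.5, 4.3],
    "income":       [68000, 69500, 54000, 55200, 60100, 61000],
})

# Group-level summary: mean unemployment and income per region
summary = data.groupby("region").mean()
print(summary)

# A simple derived statistic: correlation between the two series
print(data["unemployment"].corr(data["income"]))
```

Even a short script like this already goes beyond what most menu-driven tools offer, because every step of the analysis is reproducible and adjustable.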

The degree, which is the first and only program of its kind currently available on the East Coast, will comprise 10 courses from the economics and data science departments, including three electives. Students will also complete a capstone, internship, or thesis.

Francis noted that economics is still a very traditional liberal arts degree that incorporates political science, history, and psychology but has become much more quantitative. 

Data science offers skills that give students a much deeper understanding of algorithms, which is why the dual MA/MS degree in Economics and Data Science was first developed in 2021. Students who pursue the dual degree gain a deeper grounding in economic theory and computational methods and have the time to engage in research projects; the new degree combines the two disciplines even more seamlessly.

“This new degree allows students to do the degree in a year and a half, or a calendar year if they do summer courses, and it is much more intense than the dual degree,” she said. 

Gaining a Competitive Edge

Yijun Zhao, Ph.D., associate professor of computer and information science and program director of the M.S. in Data Science program, said data science students will equally benefit from immersing themselves in the field of economics.

“For data science students looking for jobs, one of the major challenges is that they have the technical skills but lack the knowledge or language of a particular field,” she said.

“This degree will help data science students gain the necessary knowledge in economics, giving them a competitive edge.”

Francis said the emergence of AI large language models, or LLMs, has made the degree even more valuable.

“Economics is a very analytic discipline with a basis in human behavior, and when you combine it with a knowledge of algorithms that are the backbone of LLMs, you give students a very solid background in problem-solving.”

To learn more, visit the program webpage.

Student STEM Researchers Awarded Competitive DAAD-RISE Internships in Germany
https://now.fordham.edu/colleges-and-schools/fordham-college-at-rose-hill/student-stem-researchers-awarded-competitive-daad-rise-internships-in-germany/
Fri, 21 Apr 2023

Two Fordham undergraduates have been selected to participate in DAAD-RISE, a prestigious international program that will allow them to complete a research internship in Germany this summer. The competitive program allows undergraduates to work with doctoral students and researchers in their area of interest at a top German university or research institute.

This year’s DAAD-RISE recipients, Lindsey Berry and Luisa Rosa, are both juniors at Fordham College at Rose Hill who are pursuing majors in STEM. Since the DAAD-RISE program was established in 2005, Fordham has had 14 recipients, including this year’s. 

“DAAD-RISE is an outstanding opportunity for our undergraduates. Not only do they conduct cutting-edge research over the summer, but they also get to know Germany—meeting fellow research scientists and locals—which helps them in any career they choose to pursue, from science and engineering to medicine and teaching,” said Lorna Ronald, Ph.D., director of Fordham’s Office of Prestigious Fellowships.

Studying How the Human Brain Processes Sound 

Berry, the daughter of U.S. Air Force parents, has lived in five U.S. states and abroad in Germany and is currently an integrative neuroscience major and a German studies minor at Fordham. She conducts research in the University’s EEG Lab for Language and Multilingualism Research, where she studies second language acquisition and processing.

This summer, Berry will join the neuroscience group at Heilbronn University’s Center for Machine Learning, where she will study inattentional deafness—being unable to hear usually audible sounds, like alarms, because you’re too focused on a visual task. One example is texting while walking, and not noticing the sound of a nearby bicycle bell or a car engine. 

“The greater scope of the research that I’ll be doing in Germany is that hopefully it will help people who have suffered brain damage, such as through strokes or traumatic brain injuries. By learning more about why and how inattentional deafness occurs, we can then work towards treatment options and learn more about how the brain filters out and processes certain auditory sounds,” said Berry, who plans on eventually earning her Ph.D. and becoming a psycholinguistics researcher. 

Building a Device That Can Help Athletes in Real Time

Rosa is a computer and information sciences major at Fordham. She conducts research in the University’s Educational Data Mining Lab, where she analyzes data on Fordham student academic performance to understand grading patterns, instructor effectiveness, and future student performance. Rosa, an international student from Brazil, is also a member of the women’s swimming and diving team. 

This summer, Rosa will join a human-computer interaction research group at LMU Munich to develop and evaluate a wearable sensor-based system that can give real-time feedback to athletes and those who want to improve their skills in any physical activity. 

This system could someday help swimmers like herself. Rosa said she has about 30 teammates and two coaches at Fordham. It can be difficult for each athlete to receive individualized feedback during sports practices, she said. However, the system she will work on may have the ability to evaluate a user’s movements—stroke or breathing patterns, for example—and show them how to improve their movements in real time. 
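As a rough illustration of the kind of signal processing such a wearable system might perform, the sketch below estimates a swimmer's stroke rate by counting threshold crossings in a synthetic accelerometer trace. Everything here, the function, the threshold, and the signal itself, is a hypothetical simplification for illustration, not the LMU Munich group's actual method.

```python
# A toy sketch of one piece of such a system: estimating stroke rate
# from a wearable accelerometer signal by counting threshold crossings.
# Real systems are far more sophisticated; the signal here is synthetic.
import numpy as np

def estimate_stroke_rate(signal, sample_hz, threshold=0.5):
    """Count upward threshold crossings and convert to strokes/minute."""
    above = signal > threshold
    # A "stroke" begins wherever the signal rises through the threshold
    crossings = np.sum(~above[:-1] & above[1:])
    duration_min = len(signal) / sample_hz / 60.0
    return crossings / duration_min

# Synthetic 10-second signal sampled at 50 Hz: 1.5 strokes per second
t = np.linspace(0, 10, 500, endpoint=False)
signal = np.sin(2 * np.pi * 1.5 * t)

rate = estimate_stroke_rate(signal, sample_hz=50)
print(rate)  # 90.0 strokes/minute for this synthetic signal
```

A real-time version would run the same idea over a sliding window and push the result to the athlete's display as each window completes.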

“I think this system has the potential to revolutionize sports, and I’m really excited to be a part of it,” said Rosa, whose long-term goals are to join a company like Google and eventually lead their data science research team.  

Fordham Earns Grant to Hold Second Cybersecurity Summer Camp
https://now.fordham.edu/university-news/fordham-earns-grant-to-hold-second-cyber-security-summer-camp/
Fri, 01 Jul 2022

After successfully holding a weeklong summer camp for middle and high school students last July, Fordham was once again awarded a prestigious grant by GenCyber, a program jointly run by the National Security Agency (NSA) and the National Science Foundation (NSF), to hold another camp next summer.

The $150,000 grant will fund a camp to be held in July 2023, geared toward students from minority communities in the Bronx.

University Professor Thaier Hayajneh, Ph.D., founding director of Fordham’s Center for Cybersecurity, said the focus of the camp, which will be held in person for the first time, will be on educating students about cryptography and cryptocurrencies.

Thaier Hayajneh, Ph.D., professor of computer science.
Photo by Chris Taggart

The previous camp was held virtually and focused on mobile phone security. Because the pandemic made in-person meetings unfeasible, Hayajneh and camp organizers mailed iPhones and Samsung phones to participants and led them through exercises via Zoom sessions. Students were also mailed t-shirts, gift cards, and food vouchers.

“Throughout the camp, we taught them the principles and the basics of cybersecurity in the beginning, and then they practiced those principles and concepts on an actual system—in that case, it was the smartphones. We didn’t want them to ruin their own phones or their parents’ smartphones,” he said.

To make it more enjoyable, organizers encouraged students to make and exchange videos of themselves conducting exercises with the phones they were sent. An agent from the FBI also shared details from a real case involving smartphone security. In surveys conducted at the camp’s conclusion, many students indicated a strong interest in pursuing a career in cybersecurity.

Cryptocurrency: Good Hype and Bad Hype

In focusing on cryptocurrency for next summer’s camp, Hayajneh said the goal is to shine a light on an area that has both tremendous potential and potential risks.

“Even my 8-year-old has asked me about Dogecoin. Kids hear about it from their friends and want to know how they can buy cryptocurrencies. We don’t want teenagers to fall victim to scams and other malicious activities that could target them in the future,” he said.

In addition to teaching students the basics behind blockchain, the technology that makes cryptocurrency possible through security, transparency, and traceability, instructors will explain how cryptocurrency can be used for nefarious purposes, such as ransom payments.
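The tamper-evidence property at the heart of such a curriculum can be demonstrated with a toy hash chain: each block commits to the hash of the previous one, so altering any past transaction invalidates everything after it. This is a teaching sketch, not a real cryptocurrency ledger; the transaction strings are invented.

```python
# A toy hash chain illustrating the tamper-evidence property that
# underlies blockchains: each block stores the previous block's hash,
# so rewriting history breaks the chain. Teaching sketch only.
import hashlib
import json

def block_hash(block):
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def make_chain(transactions):
    chain, prev = [], "0" * 64
    for tx in transactions:
        block = {"tx": tx, "prev": prev}
        chain.append(block)
        prev = block_hash(block)
    return chain

def verify(chain):
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev:
            return False
        prev = block_hash(block)
    return True

chain = make_chain(["alice->bob 5", "bob->carol 2"])
print(verify(chain))               # True: the chain is intact
chain[0]["tx"] = "alice->bob 500"  # tamper with a past transaction
print(verify(chain))               # False: every later block is invalidated
```

The same property explains the irreversibility Hayajneh describes: once a transaction is buried under later blocks, changing it would require rewriting the entire chain.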

“There’s good hype and bad hype about cryptocurrencies, so we’ll introduce the students to it, and also make them aware of the value and risks,” he said.

“Unlike banking, where if you make a bad transaction, you can call your bank and tell them it wasn’t what you did, or they made a mistake, with crypto, once a transaction happens, it’s done. You can never reverse that. So that’s one of the scary things about it.”

The second grant for a new session of camp is further validation from the NSA of Fordham’s leadership in the field, Hayajneh said. In 2017, the NSA designated the University a National Center of Academic Excellence in Cyber Defense Education (CAE-CDE).

“One of the main goals of GenCyber is to increase interest in cybersecurity, and hopefully many of those kids that are tech-savvy become ethical hackers,” he said.

Center for Cybersecurity Receives $4.1 Million Grant for Scholarships
https://now.fordham.edu/politics-and-society/center-for-cybersecurity-receives-4-1-million-grant-for-scholarships/
Tue, 25 Jan 2022

Fordham’s Center for Cybersecurity has secured a five-year grant from the National Science Foundation (NSF) that will enable the University to provide full scholarships to undergraduate and graduate students who wish to earn a degree in cybersecurity.

The grant, “CyberCorps Scholarship for Service: Preparing Future Cybersecurity Professionals with Data Science Expertise,” was announced by the foundation on Jan. 21. It is the largest grant the Center for Cybersecurity has ever received, and is also the largest grant for cybersecurity scholarships that is funded by the federal government, according to center founder and director Thaier Hayajneh, Ph.D. Hayajneh was the principal investigator for the grant, while Gary Weiss, Ph.D., professor of computer and information sciences, was the co-principal investigator.

Recognition of Skills and Quality


“It’s a recognition that we have the ability to produce the graduates who have the skills and the qualities to serve the nation’s needs in cybersecurity,” said Hayajneh, a University Professor in the department of computer and information sciences.

Over the course of the five years, Hayajneh said, the fund is expected to cover the full tuition, health insurance, housing, and related expenses for 44 “student years.” Students can earn scholarships for up to three years of study, so an undergraduate could use the funds to cover their third and fourth year of studies and their first year of master’s studies, or their final year of undergraduate studies and two years of a master’s degree. A student who enrolls in the department’s new doctoral program could have their entire three years of study covered by the grant.

A Commitment to Service

The grant is similar to one that the center was awarded in 2020 by the United States Department of Defense. Students who accept the scholarship make a commitment to work for at least two to three years for a federal agency such as the National Security Agency. This grant is larger and more flexible though, as it is not restricted to specific individuals chosen by a government agency.

Fordham is one of eight universities to join the CyberCorps Scholarship for Service program this year, which currently includes 82 universities representing 37 states, the District of Columbia, and Puerto Rico.

Key to landing the grant, Hayajneh said, was a demonstration of both Fordham’s excellence in the fields of cybersecurity and data science and the heartfelt desire of its students to serve the United States, rather than simply parlay their degrees into lucrative careers in the private sector. In a 20-minute in-person meeting in November, Fordham was able to show that its graduates have that kind of commitment, and Joseph M. McShane, S.J., president of Fordham, assured grant administrators of the same.

Data Science Expertise

Fordham brings to the table unique expertise in both cybersecurity and data science, Hayajneh said, through the department of computer and information sciences and the Gabelli School of Business. Students with experience in data mining and machine learning will be better equipped to work with computer systems that can predict security breaches, not just respond to them.

“They will not only be able to detect attacks when they occur, but they will also have the ability to predict and prevent attacks before they even occur. This is very important because you want to stop the damage before it even starts,” he said.
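One simple flavor of the predict-and-prevent idea Hayajneh describes is anomaly detection: learn a statistical baseline of normal activity, then flag deviations before they escalate. The sketch below is a deliberately minimal, hypothetical illustration using request rates; production intrusion-detection systems use far richer features and models.

```python
# A minimal sketch of the idea behind ML-assisted intrusion detection:
# learn what "normal" traffic looks like, then flag extreme deviations
# early. The traffic numbers below are synthetic and hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# Training data: requests/second observed during normal operation
normal_traffic = rng.normal(loc=100, scale=10, size=1000)
mean, std = normal_traffic.mean(), normal_traffic.std()

def is_anomalous(rate, z_threshold=4.0):
    """Flag traffic whose z-score against the learned baseline is extreme."""
    return abs(rate - mean) / std > z_threshold

print(is_anomalous(105))   # typical load -> not flagged
print(is_anomalous(900))   # possible flood attack -> flagged
```

The prediction step in real systems extends this: rather than waiting for the rate to spike, models trained on historical attack precursors can raise the alarm while the deviation is still small.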

“After you assure them that your program is ready, and you’ll have the best graduates in cybersecurity with some data science expertise, they want to make sure that you can contribute to another component, which is that you’ll assure their success rate,” Hayajneh said.

The Scholarship for Service program has a 95% success rate of placing graduates in government jobs; Hayajneh said that the University was able to point to previous alumni who’ve gone into careers in government as proof of Fordham students’ commitment to service.

Ultimately, he said, the scholarships, which are open to students of all majors and schools in the fall of 2022, are an ideal way to get a foot in the door of a challenging but rewarding field.

Experience and Mentorship

“Even if you graduate from the best program in the world, it’s always hard to start that first cybersecurity job because of a lack of experience,” he said.

That’s because the stakes are so much higher in the field than in others. An inexperienced network administrator can accidentally delete old records, invade someone’s privacy, or give enemies control of their networks, he said.

“This opportunity will guarantee students the ability to work in one of the top agencies in the cybersecurity field,” he said.

Additional Academic Partners

On Jan. 24 Fordham received word of another academic partnership in cybersecurity education: The University was accepted into the United States Cyber Command Academic Engagement Network. The newly formed network brings together 84 universities and colleges and government institutions such as the Cyber National Mission Force, U.S. Fleet Cyber Command, U.S. Marine Corps Forces Cyberspace Command, and the U.S. Coast Guard Cyber Command. The partnership will focus on future workforce development, applied cyber research, analytic partnerships, and cyber strategic dialogue.

Data Scientist Uses Algorithms to Promote Common Good
https://now.fordham.edu/science/data-scientist-uses-algorithms-to-promote-common-good/
Fri, 10 Dec 2021

Ruhul Amin, Ph.D., wants to understand patterns all around us.

And with the aid of technology and data, he says, there is nothing that one can’t sort through. Want to know what the common themes are of 30,000 books? There’s an algorithm for that, and his team developed the methods to understand how syntax and themes influence a book’s success. Maybe you’d like to help a country better manage the way it responds to a pandemic? There’s an algorithm for that—and he’s used it in studies such as “Adjusted Dynamics of COVID-19 Pandemic due to Herd Immunity in Bangladesh.”

“I feel like, as a scientist, we all dream of impacting the actual lives of people. It’s not just that we will limit ourselves to theoretical contributions only. I figured that our work could reach the public by working side by side with the government, especially policymakers. This is how I thought it would be the best way to achieve a common good,” he said.

“I love data science because, with data science, you can work on so many diverse projects.”

Predicting COVID-19 Spikes

Amin, a native of Bangladesh who joined the department of computer and information science as an assistant professor in 2019, has been focusing his data analysis tools on an array of areas, most recently the pandemic.

In “Adjusted Dynamics,” he and four collaborators examined data from the Bangladeshi government and created a new model that tries to predict how many people will become infected with COVID-19. A new model was needed because in Bangladesh, testing is prohibitively expensive, unlike in the United States, where it’s free. This means that Bangladeshi residents wait longer to get tested after initial exposure, and because COVID-19 can be spread by people who are not showing symptoms, they may be spreading it to others, causing the positivity rate to skew higher.

They started with SIRD (Susceptible-Infectious-Recovered-Deceased), a common statistical model, and modified it using a Kalman filter, an algorithm traditionally used in physics to predict the trajectory of objects in motion. For each of the country’s 64 districts, they assigned color codes of green, yellow, and red, and plotted them on a timeline from May 2021 to May 2022. Ultimately, they were able to accurately predict, 95% of the time, where rates of COVID-19 would rise and where they would fall. He shared the methodology with the Bangladeshi government, which instituted some of the recommendations regarding actions such as lockdowns.
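The baseline SIRD dynamics the researchers started from can be sketched in a few lines. This shows only the unmodified textbook model with illustrative parameter values; the published study layers Kalman-filter corrections from observed case data on top of dynamics like these.

```python
# A plain SIRD (Susceptible-Infectious-Recovered-Deceased) simulation
# using simple daily (Euler) updates. Parameter values are illustrative,
# not those fitted in the study.
def simulate_sird(S, I, R, D, beta=0.3, gamma=0.1, mu=0.01, days=100):
    N = S + I + R + D  # total population, conserved by the updates
    history = []
    for _ in range(days):
        new_inf = beta * S * I / N   # new infections per day
        new_rec = gamma * I          # recoveries per day
        new_dead = mu * I            # deaths per day
        S -= new_inf
        I += new_inf - new_rec - new_dead
        R += new_rec
        D += new_dead
        history.append((S, I, R, D))
    return history

history = simulate_sird(S=999_000, I=1_000, R=0, D=0)
peak_day = max(range(len(history)), key=lambda t: history[t][1])
print(peak_day)  # day of peak infections under these parameters
```

A Kalman filter enters at each time step: the model's predicted state is blended with the latest observed case counts, weighting each by its uncertainty, which is what lets the approach compensate for the delayed, sparse testing the article describes.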

Forecasting a Book’s Success

Computational research is extremely flexible and thus highly interdisciplinary in nature, Amin said. When he learned that one of his graduate students had earned a bachelor’s degree in English literature, they teamed up for a project that requires a deeper understanding of both linguistics and natural language processing (NLP). Using language features such as syntax, and the conceptual framework on which a piece of literature is based, they created NLP models to make predictions about a book’s success. The models were trained on the properties of successful books to predict either their ranking on Goodreads or the number of times they had been downloaded.
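In spirit, the approach maps each book to numeric language features and fits a model from those features to a success measure. The toy example below uses ordinary least squares on two invented features; the feature values, targets, and model are hypothetical stand-ins for the far richer NLP features used in the actual research.

```python
# A toy illustration: represent each book by simple language features,
# then fit a linear model predicting a popularity score. All numbers
# here are hypothetical.
import numpy as np

# Rows: [avg sentence length, type-token ratio] per book (invented)
X = np.array([
    [18.2, 0.42],
    [25.1, 0.55],
    [12.7, 0.38],
    [21.4, 0.47],
])
# Target: log of download counts (invented)
y = np.array([8.1, 9.4, 7.2, 8.7])

# Ordinary least squares with an intercept term
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict(sent_len, ttr):
    return coef[0] + coef[1] * sent_len + coef[2] * ttr

print(predict(20.0, 0.45))  # predicted log-downloads for a new book
```

Real models of this kind swap the two hand-picked features for hundreds of syntactic and semantic features extracted automatically from the full text.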

In a similar study, “Stereotypical Gender Associations in Language Have Decreased Over Time” (Sociological Science, 2020), Amin used an automated process to scan a million English-language books published between 1800 and 2000. He found that while stereotypical gender associations in language have decreased over time, career and science terms still demonstrate positive male gender bias, while family and arts terms still demonstrate negative male gender bias. He then extended the work at Fordham to produce another study, “A Comparative Study of Language Dependent Gender Bias in the Online Newspapers of Conservative, Semi-conservative and Western Countries.”

The success of studies such as these has made Amin confident that he can use the technique to examine documents to detect everything from political leanings to racial bias.

Finding Patterns in Mental Health Hotline Calls

Amin is also working in the area of mental health. In collaboration with NYU and the University of Toronto, he is analyzing five years’ worth of recorded phone conversations from a popular mental health “befriending” hotline in Bangladesh. The goal is to use past records to see if any patterns emerge that can be used for the future. This could be used by healthcare professionals to better tailor messages to the public or adjust staffing levels more efficiently.

“The interesting thing is what people really discuss during, let’s say, the weekend. Is it different from the weekdays? When do you get the most calls? Is it right after you post something where you say, ‘Hey, we’re a befriending service, we’re providing this kind of help?’ When do you get suicidal calls? You can literally change this area by using this modeling,” he said.

So long as there is enough data and computing power, Amin is optimistic that the possibilities for projects using algorithms are nearly endless. One of his projects, for instance, involves the analysis of a billion tweets on Twitter that tries to ascertain what constitutes offensive and biased language. Eventually, he hopes the data collected can be deployed the way Grammarly is used to clean up grammatical mistakes, but to help us identify blind spots in our perspectives.
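A drastically simplified sketch of the Grammarly-like tool Amin envisions might scan text against a lexicon of potentially biased terms and propose alternatives. The word list and function here are hypothetical illustrations; the actual research trains models on large corpora rather than relying on hand-written lists.

```python
# A toy sketch of a bias-aware writing assistant: flag terms from a
# small lexicon and suggest neutral alternatives. The lexicon is a
# tiny hypothetical sample, not a research artifact.
BIASED_TERMS = {
    "chairman": "chairperson",
    "mankind": "humankind",
    "manpower": "workforce",
}

def suggest_revisions(text):
    """Return (flagged term, suggested alternative) pairs found in text."""
    suggestions = []
    for word in text.lower().split():
        stripped = word.strip(".,;:!?")
        if stripped in BIASED_TERMS:
            suggestions.append((stripped, BIASED_TERMS[stripped]))
    return suggestions

print(suggest_revisions("The chairman spoke for mankind."))
# -> [('chairman', 'chairperson'), ('mankind', 'humankind')]
```

A model-based version would replace the fixed lexicon with scores learned from data, which is what allows it to catch subtler, context-dependent bias that no word list can enumerate.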

“I actually published a paper on gender bias, and so I thought, ‘I’m a person without any biases.’ But when I took this psychological test recently, I found that I’m still male-biased,” he said.

“We’re coming from different backgrounds, and all have these kinds of stereotypes within us. So I want to develop a tool that can suggest to you how biased and how offensive the language is that you just wrote to any person or community.”

Even Fordham itself has the potential to be a good research project; Amin has his eyes set on the collections of the University’s library system. “We’re constantly conceptualizing the whole world,” he said. “Why not Fordham?”

The Promise and Peril of Artificial Intelligence
https://now.fordham.edu/politics-and-society/the-promise-and-peril-of-artificial-intelligence/
Thu, 30 Sep 2021

The concept of artificial intelligence has been with us since 1955, when a group of researchers first proposed a study of “the simulation of human intelligence processes by machines.” At the same time, it seems like not a day goes by without news about some new development, making it feel very futuristic.

It’s also the purview of professors from a variety of fields at Fordham, such as Damian Lyons, Ph.D., a professor of computer science; R.P. Raghupathi, Ph.D., a professor of information, technology and operations at the Gabelli School of Business; and Lauri Goldkind, Ph.D., a professor at the Graduate School of Social Service.

Listen below:

Full transcript below:

Patrick Verel: Artificial intelligence is many things to many people. On the one hand, the concept has been with us since 1955, when a group of researchers first proposed a study of, “The simulation of human intelligence processes by machines.” At the same time, it seems like there isn’t a day that goes by without news of some new development, making it feel very futuristic. Need to call your pharmacy? A chatbot will answer the call. Approaching another car on the highway while in cruise control? Don’t worry, your car will slow itself down before you plow into it. Just this month, the New York Times reported that an Iranian scientist was assassinated in November by an AI-assisted robot with a machine gun.

Damian Lyons

Here at Fordham, Damian Lyons is a professor of computer science on the faculty of arts and sciences. R.P. Raghupathi is a professor of information, technology and operations at the Gabelli School of Business. And Lauri Goldkind is a professor at the Graduate School of Social Service. I’m Patrick Verel, and this is Fordham News. 

Dr. Lyons, you’ve been following this field for 40 years and have witnessed some real ebbs and flows in it, why is this time different?

Damian Lyons: Well, the public perception of artificial intelligence has had some real ebbs and flows over the years. And while it’s true that humanity has been trying to create human-like machines almost since we started telling stories about ourselves, many would trace the official birth of AI as a field to a workshop that occurred at Dartmouth College in the summer of ’56. And it’s interesting that two of the scientists at that workshop had already developed an AI system that could reason symbolically, something which was supposed to be only doable by humans up until then. And while there were some successes with those efforts, by and large AI did not meet the enthusiastic predictions of its proponents, and that brought on what has often been called the AI winter, when its reputation fell dramatically. In the 70s, things started to rise a little bit again; AI began to focus on what are called strong methods. Those are methods that make use of domain-specific information rather than general-purpose information to do the reasoning.

So the domain expertise of a human expert could be embodied in a computer program, and that was called an expert system. For example, the MYCIN expert system was able to diagnose blood infections as well as some experts and much better than most junior physicians. So expert systems became among the first commercially successful AI technologies. The AI logistics software that was used in the 1991 Gulf War, in a single application, was reported to have paid back all the money that the government had spent funding AI up until that point. So once again, AI was in the news and riding high. But expert systems again lost their luster in the public eye because of their narrow application possibilities, and AI’s reputation once again dimmed, not as badly as before, but it dimmed once again. But in the background, coming up to the present date, there were two technology trends that were brewing.

The first was the burgeoning availability of big data via the web and the second was the advent of multi-core technology. So both of these together set the scene for the emergence of the latest round in the development of AI, the so-called deep learning systems. So in 2012, a deep learning system, not only surpassed its competitor programs at the task of image recognition but also surpassed human experts at the task of image recognition. And similar techniques were used to build AI systems to defeat the most experienced human players at games such as Go and chess and to autonomously drive 10 million miles on public roads without serious accidents. So once again, predictions about the implications of AI are sky-high.

PV: Now, of all the recent advances, I understand one of the most significant of them is something called AlphaFold. Can you tell me why is it such a big deal?

DL: AlphaFold, in my opinion, is a poster child for the use of AI. So biotechnology addresses issues such as cures for disease, for congenital conditions, and maybe even for aging, I’ve got my fingers crossed for that one. So proteins are molecular chains of amino acids, and they’re an essential tool in biotechnology, in trying to construct cures for diseases, congenital conditions, and so forth. And the 3D shape of a protein is closely related to its function, but it’s exceptionally difficult to predict; the combinatorics in predicting the shape are astronomical. So this problem has occupied human attention as a grand challenge in biology for almost 50 years, and up until now, it has required an extensive trial-and-error approach to lab work and some very expensive machinery in order to do this prediction of shape. But just this summer, Google’s DeepMind produced the AlphaFold 2 AI program, and AlphaFold 2 can predict the 3D shape of proteins from their amino acid sequence with higher accuracy, much faster, and obviously much cheaper than experimental methods. This has been hailed in biology as a stunning breakthrough.

PV: R.P. and Lauri, do you have any thoughts on things that are unsung?

W.P. Raghupathi

R.P. Raghupathi: I would just add that medicine is a good example, the whole space of medicine, and as Damian mentioned, image recognition has been one of the most successful applications, in radiology. Radiologists are now able to spend more time at a high level, looking at unusual exception cases, as opposed to processing thousands and thousands of images, doing the busywork. That’s been taken out, with a great deal of success. Neuralink is another example; I’m just excited that we can hopefully solve some of our brain problems, whether from accidents or Parkinson’s or Alzheimer’s, with brain implants, chip implants, and that’s terrific progress. More recently, extending what Damian said, vaccine development and drug development have accelerated with AI and machine learning. For me, the interest is also in social and public policy, and Lauri will speak to that. I’m just looking at how being data-driven in our decision making, in terms of the UN Sustainable Development Goals or poverty alleviation, just looking at the data and analyzing it with AI and deep learning, gives us more insight.

Lauri Goldkind: It’s funny, R.P., I didn’t know that we were going to go in this direction in particular, but the UN has a research roadmap for a post-COVID world (hopefully we’ll be in that world soon). This research roadmap talks a lot about using AI, and it also talks about data interoperability, data sharing at the country level, in order both to meet the Sustainable Development Goals and to meet possibly even more pressing needs: pandemic recovery, cities recovering from natural disaster. It definitely amplifies the need for data interoperability, for deploying AI tools for these social-good pieces, and for using more evidence in policymaking. Because there’s the evidence, there are the advancements, and then there are the policymakers, and we need to build a bridge between those components.

Lauri Goldkind

PV: Dr. Lyons, you mentioned this notion of the advances being a good thing for science, a positive thing. I know that there are also fears about AI that veer into the existential realm, this notion that robots will become self-aware. I’m Gen X, so of course my frame of reference for everything is the Terminator movies, thinking about Skynet, which comes to life and endangers human existence as we know it. But there’s also this idea within the field that the concept of silos makes that unlikely, or not as likely as people may think. Can you explain a little bit about that?

DL: Yeah, sure. That’s a very good point, Patrick. Games like chess and Go were an early target of AI applications because there’s an assumption that a human who plays chess well must be intelligent and capable of impressive achievement in other avenues of life. As a matter of fact, you might even argue that the reason humans participate in these kinds of games is to sharpen strategic skills that they can then use to their profit in other commercial or military applications. However, when AI addresses chess, it does so by leveraging what I called previously strong methods; it leverages domain expertise in chess. Despite its very impressive strategy at playing Go, the AlphaGo program from DeepMind can’t automatically apply the same information to other fields. For example, it couldn’t turn from playing Go in the morning to running a multinational company effectively in the afternoon, as a human might. We learn skills that we can apply to other domains; that’s not the case with AI.

AI tools are siloed, and I think an excellent warning case for all of us is IBM’s Watson. Where is Watson? Watson is a warning about hubris, I think, in this regard. It has not remade the fortune of IBM or accomplished any of the great tasks foretold; they’ve toned down their expectations at IBM, I believe. There are applications for which a technology such as Watson could be well used and profitable, but it was custom-built for a quiz show, so it’s not going to do anything else very easily. AI tools and systems are still developed in domain silos, so I don’t believe the sentient AI scenario is an imminent prospect. However, the domain-specific AI tools that we have developed could still be misused, so I believe the solution is educating the developers and designers of these systems to understand the social implications of the field, so we can ensure that the systems produced are safe and trustworthy and used for the public good.

PV: Dr. Raghupathi, I know robots long ago replaced a lot of blue-collar jobs (I’m thinking, for instance, of car assembly lines), and now I understand they’re coming for white-collar jobs as well. In 2019, for instance, a major multinational bank announced that as part of a plan to lay off 18,000 workers, it would turn to an army of robots, as it were. What has changed?

RP: I’ll just go back to what Damian mentioned in the beginning. Two trends have impacted organizations and businesses in general. One is the rapid advances in hardware technologies, in both storage and speed, which have enabled us to do more complex and sophisticated things. And number two is the data, which he also mentioned: all of a sudden, corporations have found they’re sitting on mountains of data, and they can actually use it with all this computing power. Those two trends came together to make an ideal situation in which companies are now using AI and other techniques to automate various processes. It is slow, and we have a lot to learn, because we don’t know how to handle displacement and layoffs and so on, so companies have started with basic robotic process automation, first automating routine and repetitive tasks. But we also see more advanced work going on now. As in the example you mentioned, banks, trading companies, and hedge funds are using automated trading, algorithmic trading; that’s all machine learning and deep learning. Those systems are replacing traders.

PV: What kind of jobs do you think are going to be the most affected by AI going forward?

RP: Well, jobs at both ends. We know that the routine ones, for example in a hospital admissions process or security checks or insurance processing, anything data-driven, are already automated. And then, from the prior examples, when you call your insurance company now, for good or bad, you’re going to go through this endless loop of automated voice recognition systems. The design of those is lacking quite a bit in terms of training them on different accents; they never understand my accent. So I just hit the zero button like five times and then I get a human at the other end, or I say, “blah, blah, blah,” and the system gets it, and really, it works.

Then we have the more advanced jobs. Financial trading is one example, but also in healthcare, diagnostic decision making, like the example that was mentioned: reading MRI images, CT scan images, and X-rays is pretty advanced work by radiologists. Now the deep learning systems have taken over and are doing an excellent job, and the radiologists are there to supervise and keep an eye on outliers and exceptions.

PV: I’m glad to hear that I’m not the only one who, when I get an automated voice on the other end of the line that I just hit zero, just say, “Talk to a person, talk to a person, talk to a person.”

RP: Try blah, blah, blah, it works better, to cut to the chase.

LG: Even in my field, social work, automation and chat are beginning to take over jobs. I’m working with a community partner that’s using a chatbot as a coach for motivational interviewing, which is an evidence-based practice. One of the challenges in evidence-based practices is how faithful the worker is to implementing the strategy of the practice. We’re now seeing that instead of having a human coach provide technical assistance on implementing a particular practice, agencies are turning to chat because it’s efficient: if I don’t have to pay a human coach, I can train three more workers using this chat strategy. We tend to think that in these highly professionalized settings people have job security and safety from automation, and that’s actually just not the case anymore.

PV: What implications do these advancements have for other countries?

DL: For both developed and developing countries, one potential advantage AI holds for the future is in my own area of research, the application of AI and robotics to what’s called precision agriculture. The idea is that rather than spraying large areas with pesticides or covering areas with fertilizer, you use AI technology, embodied in ground robots and robot drones, to target specific spatial areas. If you’ve got pests growing on a particular line of tomato plants or coffee plants, you can target your pesticide to just those areas. You can even use mechanical means to pull up weeds, just as people do, rather than flying a plane overhead and spraying all kinds of nasty pesticides and other stuff that ruins the environment.

LG: On the more positive side, I was thinking of the use of chat technologies and natural language processing in mental health, and things like avatar therapy. In scenarios where there are no providers, AI has a real possibility of benefit, serving people who might not otherwise be served. There’s a growing understanding that depression, social connection, and wellbeing are interrelated, and that mental health challenges are certainly related to climate change and the future of work and all those other pieces. One way to meet that growing mental health need is to use artificial intelligence to deliver services. So on the positive side, I think there’s an opportunity to grow AI strategies in mental health.

RP: I think, Patrick, some of these implications are not just for developing countries but for our country and the developed countries too. Take the retraining of the workforce that was alluded to: we don’t have any, even for the transition from coal mines to clean technologies. What are those people going to do if we shut down the coal mines? Are we training them in the manufacture and use of advanced energy technologies? Likewise, in the last election there was some talk of this; Andrew Yang and others proposed universal basic income, and a lot of research is going on about it, the cost-benefit analysis. Some kind of safety net, some social policy, is needed as we handle this transition to an automated workforce.

LG: I mean, let’s be really clear: the reason Silicon Valley is interested in a universal basic income is that there’s a dramatic understanding of what the future of employment is going to look like. The US is a global North country, and we have a very strong ethos about work and work identity. When there are no jobs, it’s going to be really challenging even for people in traditional middle-class jobs to figure out their role with regard to working alongside AI.

PV: Now, Dr. Goldkind, this summer you wrote a paper in which you said that social work must claim a place in AI design and development, working to ensure that AI mechanisms are created, imagined, and implemented to be congruent with ethical and just practice. Are you worried that your field is not as involved in decisions about AI development as it should be?

LG: I think that we have some catching up to do, and some deep thinking to do about how we can include content like AI, automated decision making, robotics, and generalized versus specialized intelligence in our curricula. To Damian’s earlier point, I think that the same way our engineering students should be trained with an ethical lens, or minimally a lens on who might be an end user of some of these tools and what the implications might be, social work students and prospective social work professionals should also have a similar understanding of the consequences of AI use and AI mechanisms. So I think there’s a lot of room for growth in my discipline, to catch up and also to be partners in how these systems are developed. Because social work brings a particular lens: an ecosystem model, a person-in-environment approach, and a respect for human dignity.

I’m by no means suggesting that a business student or a computer science student is less respectful of human dignity, but in social work we have a set of core values that folks are opting into. And we are not, I think, preparing students to be critical about these issues and to think deeply about the implications. When they’re seeing a client who’s been assessed by an AI or a robot, what are the tools and strategies we might use to help that person be reintegrated into their community in a way that’s meaningful, on one hand? On the other hand, in the AI world there’s a huge conversation about fairness, accountability, transparency, and ethics in AI, and social work has a code of ethics and a long history of applying those codes. So it could be a real value-add to the design and development process.

PV: Yeah. When we talked before this, you mentioned this idea of graduates getting used to working alongside AI, not necessarily being replaced by it. Can you talk a little bit about that?

LG: Sure. I think AI augmentation, rather than AI automation, is where things seem to be headed as these pieces evolve. It would be useful for us as social work educators to think about how we help our students become comfortable with an augmented practice that uses AI in a positive light. For example, in diagnosis in the mental health world, AI can make a more accurate assessment than a human can because, as with R.P.’s point earlier about radiology, the AI is trained to do this one specific thing. Similarly, in mental health it would be great if we were teaching students how these tools can be deployed, so they can work on higher-order decision making or alternative planning and strategies and use the AI in a complementary fashion, as opposed to everything being completely automated.

PV: So much of this conversation revolves around jobs and the fear of losing one’s job to a robot. In your field, it seems like that is never going to be the case, because there’s such a huge demand for mental health services that there’s no way robots could physically replace all the people.

RP: Social services can be delivered more effectively with AI and these technologies, but also with data-driven approaches. Every agency is swamped with cases and workloads; sometimes it takes years to resolve whether to place a child in a foster home or whatever. I think these technologies will help process the data faster and more effectively and give that information, that insight, to the counselors, the managers, the caseworkers. They could then spend more time dealing with high-level issues than with paper pushing or processing data. So there is really great benefit there, again, in at least automating some of the routine and repetitive parts.

LG: Oh, absolutely. And also in terms of automated decision making, and even operations research: bringing some of those strategies from predictive analytics and exploratory data analysis into mental health providers, community health providers, and other providers of human services. We could deploy resources in a really strategic way that agencies don’t have the capacity to do with human decision making, where AI or a good algorithm can make really efficient use of the data people are already collecting.

DL: I just want to chime in on that. That’s such an interesting discussion, and I guess I feel a little out of place, because I’m going to say something I normally don’t say, which is that now you’re making me very worried about the application of AI. We already know that there are lots of issues in the way people develop AI systems; the engineers or computer scientists developing them don’t always take a great deal of care to ensure that their data is well curated or representative from a public-good perspective. But if we’re going to use those systems to help counsel and interact with vulnerable humans, then there’s a tremendous opportunity for misuse, corruption, and accidental mistakes. So I’m a little worried. I think we have to be really careful if we do something like that. I’m not saying there isn’t an opportunity there, but that’s a case where the implications of the use of AI are pretty dramatic, even with the current state of AI. So we probably want to be very careful how we do that.

LG: In a perfect world, I would have my social work students cross-trained with your CS students, because I think there’s real value in those interdisciplinary conversations, where people become aware of unintended consequences, or of possible biases that can be embedded in data, and what that means for a particular application. But I also want to note that the same way universal basic income has been discussed as a balm for future-of-work issues, predictive analytics and automated decision making are already in place in social services. They’re being used, not even just tested but really used, in triaging cases in child welfare, as one could imagine, not without controversy. Allegheny County, in Pennsylvania, has probably the most developed deployment, using automated decision making to help triage cases of child welfare abuse and neglect. And it’s really another case of decision making supporting human workers, not supplanting them.

PV: Have any specific innovations in the field made you optimistic?

DL: Can you define what you mean by optimistic? So for example, if sentient AI was developed tomorrow, I’d be over the moon, I would think this would be great, but I would think that other people would say that this was the worst thing that could happen. So maybe you need to be a little more specific about what optimism means in this case.

PV: I guess the way I’m thinking about it is, when you think about the advances that we’ve made so far, and you see where things are going, in general, what do you feel is going to be the most positive thing we’ll be seeing?

RP: Medicine, I think, is one area. It’s just so fascinating that we can give people back some of their lives, in terms of Parkinson’s or Alzheimer’s, or as a result of wars and strokes. Combined with what Damian said about the biological aspect, decoding proteins, et cetera, drug discovery and solving health and medical problems is just outstanding, even stunning. I would continue to follow that one.

LG: I also think that in robotics specifically, which sits under the broad umbrella of AI, there are some real advances in caregiving. That has broad application, as we’re an aging society, not just in the US but internationally, with not enough caregivers to offer daily living support to older adults, in facilities and out, and to keep people in their homes. There are so many advances to support independent living for older persons that will be automated, from caregiving robots to smart homes and internet-of-things advances that use the big data we’ve been talking about to help somebody be independent in a community. I think those pieces show significant promise in a way that human caregiving won’t be able to keep up with fast enough.

RP: I must add to that. I’ve been following the remote monitoring of senior citizens, experiments done in various countries. We are a little behind; Japan has been way ahead, 20 years ahead. There was once a picture of this wonderful old lady, 85 years old, sitting in a bathing machine, like a washing machine, going through all the cycles, though you’d probably still need an attendant to switch it off before the spin cycle.

DL: One of the things that makes me feel good about the progress of AI in society is that there has already been attention paid to understanding the restrictions that need to be placed on AI. For example, winding back to one of the very first examples you gave in this talk, Patrick: lethal autonomous weapons. There have been a number of attempts, conferences, and meetings to understand how we’re going to deal with the issue of lethal autonomous weapons. There are organizations such as the Future of Life Institute, whose objective is to understand how technologies such as AI, which present an existential threat to human lives, could be dealt with and used effectively, but constrained enough, and early enough, that they remain useful.

So with AI, I think we’re at that point. We can talk about trying to get folks to sign on to a lethal autonomous weapons pledge, which the Future of Life Institute is trying to do, or at least understand the issues involved and ensure that everybody understands them now, before lethal autonomous weapons reach a serious stage where we can no longer control the genie; it’s out of the bottle at that point. So that’s something that makes me feel optimistic.

Supporting Women in Tech: Five Questions with Gianna Migliorisi https://now.fordham.edu/fordham-magazine/supporting-women-in-tech-five-questions-with-gianna-migliorisi/ Tue, 04 Jun 2019 20:04:49 +0000 https://news.fordham.sitecare.pro/?p=121184

Gianna Migliorisi has worked in tech for more than a decade, but until last spring, she didn’t realize just how unwelcoming the industry could be for women.

“My entire career I was walking around, oblivious, thinking that I was no different from any of my male colleagues, that every other woman in technology was treated with the same respect and equality that I had been fortunate enough to encounter in the workplace,” she wrote in a post on Medium.

Her epiphany came at the 2018 Women of Silicon Valley conference in San Francisco, where she heard stories of female software engineers who had to work harder than their male counterparts in order to gain approval, or sometimes, even to get in the door. According to the National Center for Women & Information Technology, only 26% of professional computing jobs in the 2018 U.S. workforce were held by women, and only 20% of Fortune 500 chief information officer (CIO) positions were held by women in 2018.

The conference was such an eye-opener for her, she says, because she has always felt supported in her academic and career choices.

“I didn’t really appreciate how important it is for women in a science field to be recognized, because there are not many of us,” she says.

The Brooklyn native not only grew up with parents who both worked in the sciences—her mother is a scientist who taught anatomy to medical and nursing students, and her father is a pharmacist—but she also received a great deal of encouragement from faculty at Fordham.

During her sophomore year, computer science professor Robert Moniot, Ph.D., nominated her for a Clare Boothe Luce Scholarship for women in the sciences. The award gave her the financial support to enroll in Fordham’s dual-degree program in computer science. She began taking graduate-level courses as an undergraduate, and earned her master’s degree in 2008.

While finishing her master’s, Migliorisi began working at National Grid, the utilities company. She later joined HBO, where she was part of the team that launched the HBO Go app, and worked at a software company before joining Discovery Inc. in August 2015. As a senior director of technical product management, she works with engineers to build features and products for the company’s streaming apps, including those for TLC and Animal Planet.

Her professional success, and her experience at the Women of Silicon Valley conference, have led Migliorisi to try to create an environment in which other women can succeed.

“I’ve been making a conscious effort to try to be more supportive of [my women colleagues’] particular struggles,” she says. “I definitely make it more of a priority now to hire more women, and I look around to make sure other people are hiring more women.”

Migliorisi knows she was fortunate to find at Fordham an environment where she felt supported and could develop her skills and confidence.

“[My professors] never discouraged me from anything and never made me feel like I wasn’t capable of doing this job or learning,” she says. “They were super helpful, especially when you needed that extra effort, and they had a genuine interest in your success. I had a really, really good experience.”

Beyond academics, Migliorisi was a member of the Commuting Students Association, an orientation leader and orientation coordinator, and a member of the Senior Week Committee.

“As a commuter, I wanted to feel like I had a connection to my school and make sure that other commuters had that connection, too,” she says. “Fordham did a great job of catering to commuting students and making resources and activities available for them to be a part of.”

That positive experience has led Migliorisi to stay involved with Fordham however she can, from donating to attending events.

“Really, I had such a wonderful experience there that I definitely believe in giving back to a place that I feel like shaped me as a person.”

Fordham Five

What are you most passionate about?

This is hard because I get excited about a lot of things … but I feel like I’m most passionate about making others happy. I bake a lot, which relieves stress for me, but I bake things and bring them to work because it makes everyone so happy. Little things like that. Saying thank you for something small, buying someone some flowers to cheer them up … giving hugs … organizing happy hours. Everyone works really hard, and I like to make sure they know they’re appreciated, so it makes me happy to make others happy.

What’s the best piece of advice you’ve ever received?

“No one wants to mess with something that’s working.” My manager always reminds me of that when there is a lot of change going on in the workplace, and when certain changes can lead to uncertainty. Change isn’t easy, and when the future is uncertain, it makes it harder sometimes to concentrate and do your job. Remembering to just do your best and keep focusing on your mission will help you navigate the waters of change, and most of the time, bad change won’t come your way if things are going in the right direction.

What’s your favorite place in New York City? In the world?

How do you pick one place in New York City? I think anywhere there’s a spot of green in NYC is my favorite place. There’s nothing like hanging out at Bryant Park on a nice summer afternoon. In the world: Anywhere where there’s a beach with nice warm water is my happy place.

Name a book that has had a lasting influence on you.

Extreme Ownership: How U.S. Navy SEALs Lead and Win, by Jocko Willink and Leif Babin, has had a huge influence on me, particularly as a leader in a work environment. It teaches you to take ownership of everything, including the mistakes of a team. If you’re a leader, and your team is underperforming, it’s not their fault, it’s yours. You as a leader, no matter what situation you are in, have an obligation to the people you lead—to build trust, encourage, and inspire them. If someone on your team fails, it’s because you failed in some way. Never misplace the blame; always own your mistakes.

Who is the Fordham grad or professor you admire most?

Professor Stuart Sherman in the English department. I absolutely hated English classes, and English professors didn’t like me that much. I was never very good at analyzing things from a creative perspective (I’m a logical thinker) and my writing wasn’t amazing. Professor Sherman took the time to help me be a better writer. He taught his courses with so much passion and love and enthusiasm, it was infectious. He made me love a course I absolutely hated, and in my mind, that is the mark of an amazing teacher. I may not remember everything I learned in his classes, but I remember him for his energy and his kind heart and his love for teaching.

Students Use Gaming Technology to Track Endangered Toads https://now.fordham.edu/science/students-use-gaming-technology-to-track-endangered-toads/ Wed, 29 May 2019 13:58:53 +0000 https://news.fordham.sitecare.pro/?p=120625 Over the past several years, three cohorts of Fordham students have worked with zoologists from the Bronx Zoo and Professor Damian Lyons, Ph.D., of the Department of Computer and Information Science on a project with roots in Africa. They set out to observe the movements of endangered Kihansi spray toads by using camera tracking technology originally associated with gaming.

Now, student-developed software that works with the camera technology promises to help conservationists better understand how to protect future generations of the toads so that they can continue to thrive in their natural habitat in Tanzania. Next year, two more students will pick up the project.

From Africa to the Bronx, and Back Again

Discovered in 1996, the Kihansi spray toad lived in a five-acre microhabitat created by the spray of waterfalls in the Kihansi Gorge, which came under threat with the construction of a nearby dam that dramatically changed the habitat and decreased the size of the mist zone. The species was last seen in the wild in 2005 and was declared extinct-in-the-wild by 2009 by the International Union for Conservation of Nature, likely due to the environmental changes and the emergence of a deadly fungus.

As the toad population declined, a partnership between the Bronx Zoo, the Tanzanian government, and the World Bank facilitated the collection of 499 spray toads, which were brought to the Bronx to initiate an off-site conservation program. Custom microhabitats replicating their home in Tanzania were built in bio-secure facilities at the Bronx Zoo, and later at the Toledo Zoo, where the toads were successfully bred in the hopes of reintroducing them to the wild.

Back in Tanzania, the government ran the Lower Kihansi Environment Management Project to create a gravity-fed misting system. The project resuscitated the toads’ habitat, and in 2010 the first 200 toads were returned to Tanzania, to a breeding facility at the University of Dar es Salaam. The first of several reintroductions to the gorge occurred in 2013, making them the first amphibian species to be reintroduced after being declared extinct in the wild.

From left, rising junior Douglas Lampone, rising senior Michael Wieck-Sosa, recent FCRH graduate Philip Bal, the Bronx Zoo’s Avi Shuter, and Professor Damian Lyons pose behind the scenes at the Bronx Zoo.

Enter Fordham

The Fordham piece of the project began about five years ago when Kelly Cunningham, FCRH ’14, worked with James MacDonall, Ph.D., professor emeritus of psychology, to study the pecking behavior of pigeons. At the time, contact switches and touch-screen sensors were the state of the art for recording pigeons pecking at a target as part of psychological learning experiments, but a disadvantage of that simple mechanism is that when the pigeons’ beaks began to hurt, they stopped pecking at the switches. Further complicating things was the fact that this technology missed when pigeons were distracted or facing the wrong way, said Lyons.

As a computer scientist under the tutelage of Lyons, Cunningham worked in Fordham’s Computer Vision Lab to introduce the Microsoft Kinect sensor to the study. The Kinect is a motion-sensing input system initially developed for the Xbox. Its cameras presented a flexible and inexpensive image-based approach to solving the tired-beak problem, as well as a way to observe behaviors beyond pecking.

Lyons and Cunningham wrote a paper published in a Psychometric Society journal in 2014 on their findings, which caught the eye of Avi Shuter at the Wildlife Conservation Society’s Bronx Zoo. Shuter is the Senior Wild Animal Keeper in the zoo’s Department of Herpetology.

He was researching the behavior of the Kihansi spray toad, and he thought the technology might be helpful in the zoo’s efforts to better understand the animal. He reached out to Lyons, who in turn put Armando Califano, FCRH ’17, GSAS ’19, on the case.

Taking the Toads to Task

With the help of an undergraduate research grant from Fordham College at Rose Hill, Califano refined the tracking system developed by Cunningham, shifting the camera from Microsoft Kinect to the Intel RealSense, which had more accurate depth perception. But Califano could only take the project so far before entering graduate school, and the experiment was put on hold.  That’s when Philip Bal, FCRH ’19, came into the picture.

In his junior year, Bal decided to shift his focus from biology to computer science, making him a perfect candidate to pick up the project. Over the past year and a half, Bal wrote new software that uses the camera to track the toads and generate behavior analytics, ultimately by distinguishing toads from other moving and stationary elements in their tanks. With Lyons overseeing the computer technology and Shuter overseeing the biology, Bal was able to further develop software that responded to the needs of zookeepers.

A Tiny Target

The toads average no more than an inch in length. They are kept in standard fish tanks, about two feet wide, two feet deep, and three feet high, with the camera sitting an inch and a half from the glass. Researchers choose a subsection of the tank to focus on, just a few dozen cubic inches along the bottom or the top, and establish a focal length to determine how deep into the tank the camera will take measurements. The camera has two lenses: one that records color, and an infrared lens that records movement.
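The measurement volume described above can be thought of as a depth-gated region of interest. The following is a minimal illustrative sketch in Python, not the project's actual code; it assumes depth frames arrive as NumPy arrays of per-pixel distances in meters, and the ROI bounds and depth limits are made-up values:

```python
import numpy as np

def gate_region(depth_frame, roi, near_m, far_m):
    """Keep only depth pixels inside the chosen tank subsection.

    depth_frame: 2-D array of distances in meters (one value per pixel).
    roi: (top, bottom, left, right) pixel bounds of the tank subsection.
    near_m / far_m: how deep into the tank to accept measurements.
    Returns a boolean mask over the ROI: True where a surface sits
    inside the measurement volume.
    """
    top, bottom, left, right = roi
    window = depth_frame[top:bottom, left:right]
    return (window >= near_m) & (window <= far_m)

# Example: a fake 480x640 depth frame with one close object at 0.30 m.
frame = np.full((480, 640), 0.60)      # back glass of the tank
frame[200:210, 300:310] = 0.30         # something nearer the camera
mask = gate_region(frame, roi=(100, 400, 100, 600), near_m=0.05, far_m=0.45)
print(mask.sum())  # → 100 pixels fall inside the measurement volume
```

Everything outside the accepted depth band (the back glass, reflections on the front pane) drops out before any motion analysis happens, which is one reason a depth camera suits this setup better than a plain webcam.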

“We have to do a whole bunch of calculations, try to figure what’s actually a toad moving and eliminate the noise, like moving leaves,” said Bal. “The first thing we do to track toads is to match them to a particular movement.”
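The matching step Bal describes, pairing each moving blob with the track of the toad it likely belongs to, can be sketched as greedy nearest-neighbor association. This is an illustrative Python sketch, not the project's actual software; the `max_jump` threshold is a made-up value, and real multi-object trackers typically use a global assignment method such as the Hungarian algorithm instead of this greedy pass:

```python
import numpy as np

def match_to_tracks(tracks, detections, max_jump=30.0):
    """Assign each detected blob to the nearest existing toad track,
    or start a new track when nothing is close enough.

    tracks: {track_id: (y, x)} last known position of each toad
    detections: list of (y, x) blob centroids from the current frame
    max_jump: farthest (in pixels) a toad is assumed to move per frame
    Returns the updated tracks dict.
    """
    next_id = max(tracks, default=-1) + 1
    for det in detections:
        if tracks:
            dists = {tid: np.hypot(det[0] - p[0], det[1] - p[1])
                     for tid, p in tracks.items()}
            best = min(dists, key=dists.get)
            if dists[best] <= max_jump:
                tracks[best] = det       # same toad, moved a little
                continue
        tracks[next_id] = det            # nothing nearby: treat as new
        next_id += 1
    return tracks

toads = {0: (10.0, 10.0), 1: (50.0, 50.0)}
toads = match_to_tracks(toads, [(12.0, 11.0), (80.0, 80.0)])
print(sorted(toads))  # → [0, 1, 2]: track 0 updated, new track 2 created
```

A blob that matches no existing track within `max_jump` is treated as a newly appeared toad, while noise sources like swaying leaves would be filtered out earlier, before association.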

Lampone and Wieck-Sosa, pictured here getting their first glimpse of the spray toads, will be the fourth cohort of students to take over development of the tracking software.

Providing a More Accurate Picture

The group gathered approximately two days of footage that took up four and a half terabytes of stored data. Up until then, previous behavioral studies relied on direct observations of toads by scientists at predetermined time intervals. Those projects were an important start, but this new technology and software will give researchers a more complete view of toad behavior, said Shuter.

“Previous studies almost didn’t see any toads hopping,” said Shuter, who worked with the Fordham students and Lyons. “This can be a pretty shy species of toad that hides or stays still when you walk by. A lot of their behavioral repertoire also seems to be made up of split-second movements, like quick calls or hops. So, that’s part of the reason why I thought that a system where a computer could catch all that would give us a more accurate idea of what’s going on.”

One of the things that distinguishes the Fordham research from other studies on these toads is that the technology and software were new.

“This is from the ground up; it didn’t exist before,” said Shuter, adding that as a result, the project is more complex than previous studies. “I’m amazed that it has only taken this long to get to where we’ve gotten since it’s totally from scratch.”

Bal said that the project taught him quite a bit about programming.

“I learned what I was capable of, I created thousands of lines of code I never thought I would be able to write,” he said. “This is one of my favorite things to talk about, my passion project.”

Shuter said that when the zoo first received the toads in 2000, the focus at the time was to build up a colony in captivity that could be relied upon in the event that the wild population continued to decline. The zoo was able to bring the number of toads to almost 2,000.

“The struggle back then was to make more, make more, make more, and we didn’t publish research about their natural history or biology, aside from what would keep them alive, healthy, and breeding in zoos,” said Shuter. “Now, we’re a little bit calmer and things are going well in Tanzania, and we have a good handle on how to keep them alive. So now, we’re starting to look more into, ‘what’s their behavior like?’”

At a recent meeting at the zoo, Bal presented some interactions he observed in the data, including “meetings” of toads, characterized by a certain distance between the toads and the amount of time spent together.
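A "meeting" as characterized above, two toads within some distance of each other for some amount of time, can be flagged with a simple threshold pass over a pair of tracks. The sketch below is illustrative Python, not Bal's software, and the distance and duration thresholds are placeholders rather than the project's actual criteria:

```python
def find_meetings(track_a, track_b, max_dist=5.0, min_frames=3):
    """Flag stretches where two toads stay within max_dist of each
    other for at least min_frames consecutive frames.

    track_a / track_b: per-frame (x, y) positions of the two toads.
    Returns a list of (start, end) frame ranges, end exclusive.
    """
    meetings, start = [], None
    for i, ((ax, ay), (bx, by)) in enumerate(zip(track_a, track_b)):
        close = ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5 <= max_dist
        if close and start is None:
            start = i                       # toads just came together
        elif not close and start is not None:
            if i - start >= min_frames:     # long enough to count
                meetings.append((start, i))
            start = None
    if start is not None and len(track_a) - start >= min_frames:
        meetings.append((start, len(track_a)))  # meeting ran to the end
    return meetings

# Two toads adjacent for 4 frames, then one hops away.
a = [(0.0, 0.0)] * 10
b = [(1.0, 0.0)] * 4 + [(20.0, 20.0)] * 6
meetings = find_meetings(a, b)
print(meetings)  # → [(0, 4)]
```

Because the camera records continuously, even split-second encounters that a human observer on a fixed schedule would miss show up in the track data.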

Shuter plans to continue observing these interactions, and also plans to examine fighting behaviors and learn to tell them apart from mating, also referred to as amplexing.

And he may get some help from the next cohort of Lyons’ students.

“These guys might end up doing some track analysis for that,” said Lyons, gesturing to two younger students in the lab. “That’s great! We might be able to distinguish fights from amplexing.”

Fordham Secures State Grant for Information Technology Lab Upgrades https://now.fordham.edu/university-news/fordham-secures-state-grant-for-information-technology-lab-upgrades/ Mon, 13 May 2019 15:52:18 +0000 https://news.fordham.sitecare.pro/?p=120172

In spite of shrinking government funding for higher education, Fordham’s Office of Government Relations and Urban Affairs has successfully secured $2.5 million in state and municipal capital grant funding from the New York State Legislature for the renovation of the University’s computer science laboratories in John Mulcahy Hall.

The grant, which is administered by the Dormitory Authority of the State of New York, was the result of bipartisan support in the state Assembly and Senate. Lesley A. Massiah-Arthur, associate vice president of government relations and urban affairs at Fordham, explained that the University secured the support of legislative leadership, including former New York senators Jeffrey Klein and Martin Golden, as well as the Assembly’s Bronx delegation, which is led by Assemblyman Jeffrey Dinowitz.

After Klein and Golden were defeated in elections in 2018, Government Relations worked with the staffs of New York State Assembly Speaker Carl Heastie, former State Senate Majority Leader John Flanagan, current State Senate Majority Leader Andrea Stewart-Cousins and Governor Andrew Cuomo to move the project through the legislative and agency processes toward completion.

Massiah-Arthur said the grant, which will reimburse Fordham for recently completed work to refurbish facilities on the Rose Hill campus, is proof that New York lawmakers see Fordham as an institution that serves the public at large.

“While this project represents Fordham’s vision to provide its students and faculty with the resources they need for cutting-edge scientific discovery,” she said, “ultimately, the pursuit of this grant is representative of the University’s commitment to creating a vibrant science community at Fordham and within the region.”

The University has worked in tandem with government before, securing public funds for the construction of new residence halls at Rose Hill, as well as its regional parking facility.

Massiah-Arthur said that when it comes to the sciences, Fordham can point to programs such as STEP (Science and Technology Entry Program for Students in Grades 7 to 12) and CSTEP (Collegiate Science and Technology Entry Program) as examples of the University reaching beyond the campus. STEP is an enrichment program for underrepresented minority and economically disadvantaged students in junior high and high school, while CSTEP is a statewide undergraduate scholars program designed to prepare minority and economically disadvantaged students for careers in the STEM, health, and law fields. Government Relations also secured the support of its legislative delegations to successfully reverse proposed budget cuts to these programs.

The renovation of the third floor of John Mulcahy Hall was motivated by a dramatic growth in the STEM field at Fordham. Over the past decade, undergraduate enrollment in STEM has more than doubled, and graduate enrollment in STEM has septupled.

“At the end of the day, this shows not only that private colleges and universities contribute to the public good, but that we’ve worked hard at building our relationship with our legislators, staff, and the agencies,” Massiah-Arthur said.

Esteemed Cryptologist Encourages Ethical Thinking in the Nuclear Age https://now.fordham.edu/politics-and-society/esteemed-cryptologist-encourages-ethical-thinking-in-the-nuclear-age/ Mon, 29 Oct 2018 21:44:50 +0000 https://news.fordham.sitecare.pro/?p=107628

It’s easy to slip up and make decisions that border on unethical, but now more than ever, we need to be vigilant against such mistakes, said Martin Hellman, Ph.D., professor emeritus of electrical engineering at Stanford University, at a lecture at Fordham’s Rose Hill campus.

In an Oct. 22 talk titled “Challenges in Making Ethical Decisions: A Personal Journey,” Hellman, a 2015 Turing Award winner, combined deeply personal anecdotes with highlights from his career at the forefront of cryptography. He shared stories from A New Map for Relationships: Creating True Love at Home & Peace on the Planet (New Map Publishing, 2016), which he co-wrote with his wife Dorothie, and offered five suggestions for ethical behavior.

If you want to behave ethically, stay vigilant, because it’s easy to fool yourself into thinking you’ll never stray, he said. Don’t underestimate the value of outside advice. Actively work at dealing with anger you feel for others, because it’s better to have friends than enemies. Make ethics a daily concern so you’re not caught off guard when you need to make a big decision. And remember that ethical standards change over time.

Martin Hellman speaks to a group of students at the Rose Hill campus.
If you want to behave ethically, stay vigilant, because it’s easy to fool yourself into thinking you’ll never stray, Hellman said.

Noting that his talk was sponsored by the Department of Computer and Information Science (CIS), Hellman acknowledged that the rise of artificial intelligence and drone warfare made ethics particularly relevant to computer science students. But all students, he said, should be thinking about the subject.

“Whether you’re an English major or a CIS major, making more ethical decisions is becoming critically important, because we live in a nuclear age, an age of drone warfare. If we can’t learn to make ethical decisions, we will not survive as a civilization,” he said.

He was particularly passionate about the grave risks posed by nuclear weapons, of which he noted there are still tens of thousands in existence. There have also been more close calls than are generally acknowledged, he said, including Turkey’s downing of a Russian jet three years earlier.

A Personal Mea Culpa

For evidence of how one can go down the wrong path ethically, he said, audience members need look no further than the man standing before them.

“I think the best lessons are the ones we can get from real life, especially when the speaker can say ‘Mea culpa. I made a decision on unethical grounds,’” he said.

In 1975, he and Whitfield Diffie, Ph.D., posited that because technology was advancing by leaps and bounds, the proposed Data Encryption Standard (DES), which would go on to serve as the basis for computer security for 25 years, would be compromised within 15 years. When they discovered that this weakness was deliberately left in place to allow the National Security Agency to hack computers, they decided to go public, against the entreaties of NSA employees.

“My intellect was telling me that going public with this was the ethical decision, because the NSA should not single-handedly decide what the security level should be. They’re an interested party,” he said.

Then he realized that regardless of whether going public was right or wrong, doing so would make him famous. “Run with it!” he thought.

“Now, you would not want to potentially cause great harm to national security just to be famous. That is not an ethical decision,” he said.

Watch Out for Shadow Motivations

They did go public, and six years later, Hellman said a documentary made him realize that it was the right decision, but for the wrong reasons. In the film, scientists said they were motivated to work on the first atomic bomb so as to defeat Adolf Hitler, who was also pursuing nuclear weapons. But when asked why they kept working on it even after Germany was defeated in World War II, they couldn’t answer. Hellman said he thinks he knows why.

“We all have socially acceptable motivations for doing things, and those we might admit consciously. But we also have, very often, socially unacceptable motivations, like, ‘Run with it, even if it hurts national security!’” he said.

Hellman dubbed these “shadow” motivations, which often permeate our subconscious.  Scientists, engineers, and mathematicians, he said, are people whose identities are often closely associated with how smart they perceive themselves to be. In the case of the atomic bomb, he said, the scientists might have been wondering, “Is my brain powerful enough to destroy a city?”

“You never want to admit that you killed 200,000 people just to see if your brain was that powerful,” he said.

Hellman encouraged students to remember that 200 years ago, Thomas Jefferson would have considered himself to be a highly ethical person, despite the fact that he owned slaves. In hindsight, it’s easy to see today how that’s an unethical behavior, he said. But it’s much harder to see ethical blind spots we suffer from today.

“What makes us think there isn’t something today that we shouldn’t be working on?” he said.

“Being ethical in the world in a nuclear age is not an option, it’s a necessity.”

Martin Hellman and members of the department of computer and information science
Hellman spoke at the Department of Computer and Information Science’s annual distinguished lecture, which was created by Habib M. Ammari, Ph.D., associate professor of computer and information science.
