Computer and Information Science – Fordham Now
https://now.fordham.edu. The official news site for Fordham University. Wed, 08 Jan 2025 18:18:39 +0000

Lead Testing Efforts May Be Missing Kids in High-Risk NYC Neighborhoods, Study Says
https://now.fordham.edu/science-and-technology/lead-testing-efforts-may-be-missing-kids-in-high-risk-nyc-neighborhoods-study-says/ Thu, 14 Nov 2024 16:21:21 +0000

Seeking to use machine learning to advance the public good, a Fordham graduate student applied it to the data on blood tests for lead given to New York City children—and found a testing shortfall in some high-risk neighborhoods.

The study published last month in the Journal of Urban Health shows that the child populations in some neighborhoods are not being tested as completely as they should be, said Khalifa Afane, a student in the M.S. program in data science who wrote the study with his advisor, Juntao Chen, Ph.D., an assistant professor in the computer and information science department.

For the study, they used the city’s publicly available lead testing data, which he said “nobody has analyzed before” at the neighborhood level.

A Toxic Heavy Metal

Lead is a toxic heavy metal that can cause learning disabilities and behavior problems. Children pick it up from lead-based paint or contaminated dust, soil, and water. Lead exposure risk “remains persistent” among vulnerable groups including low-income and non-Hispanic Black children, the study says.

Khalifa Afane with his research poster at the Graduate School of Arts and Sciences Research Day last spring.

The city promotes blood lead level testing and awareness of lead poisoning in high-risk communities through a variety of educational efforts and partnerships.

But some high-risk neighborhoods still don’t get enough testing, Afane said. A case in point: Greenpoint in Brooklyn versus South Beach in Staten Island. The study says that despite similar numbers of children and similar testing rates, Greenpoint has consistently averaged about eight times more cases: 97 out of 3,760 tests conducted in 2021, compared with just 12 of 3,720 tests in South Beach that year.

There should actually be more testing of children in Greenpoint, Afane said, because their risk is clearly higher. While testing efforts have expanded in the city, he said, “it matters much more where these extra tests were actually conducted,” since lead is more prevalent in some neighborhoods than in others.

More than 400 Cases May Have Been Missed

For the study, he analyzed test result data from 2005 to 2021, focusing on children under 6 years old who were found to have blood lead levels of at least 5 micrograms per deciliter. Afane applied a machine learning algorithm to the testing data and projected that another 410 children with elevated blood lead levels might be identified per year citywide, mostly in vulnerable areas, by expanding testing in neighborhoods that tend to have higher case rates.
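The idea behind the projection can be sketched in miniature: rank neighborhoods by elevated-blood-lead case rate, then estimate how many additional cases extra tests might surface in the highest-rate areas. Greenpoint’s and South Beach’s figures below are from the article; “Neighborhood C” and the extra-test budget are invented, and this back-of-the-envelope version is not the study’s actual algorithm.

```python
# Hypothetical sketch of targeting extra lead tests by case rate.
# Only the Greenpoint and South Beach figures come from the article;
# the third neighborhood and the test budget are made up.

neighborhoods = {
    # name: (tests conducted, elevated cases found)
    "Greenpoint": (3760, 97),
    "South Beach": (3720, 12),
    "Neighborhood C": (5000, 40),  # invented for illustration
}

def case_rate(tests, cases):
    """Elevated cases per 1,000 tests."""
    return 1000 * cases / tests

# Rank neighborhoods from highest to lowest case rate.
ranked = sorted(neighborhoods.items(),
                key=lambda kv: case_rate(*kv[1]), reverse=True)

# Direct a hypothetical budget of extra tests to the highest-rate
# neighborhood and project additional detections at its observed rate.
extra_tests = 2000
top_name, (tests, cases) = ranked[0]
projected = extra_tests * case_rate(tests, cases) / 1000

print(top_name, round(projected))
```

Run as-is, this ranks Greenpoint first and projects roughly 50 additional detections from 2,000 extra tests there, which mirrors the study’s broader point that where tests are added matters as much as how many are added.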

The highest-risk neighborhoods are in Brooklyn, Queens, and the north shore of Staten Island, and they average about 12 cases per 1,000 tests, compared with fewer than four in low-risk neighborhoods, Afane said.

The city helps coordinate care for children with elevated levels and also works to reduce lead hazards. Since 2005, the number of New York City children under 6 years old with elevated blood lead levels has dropped 93%, a city report says.

Using a Data-Informed Strategy

But the study recommends a better, data-informed, strategy to focus more lead testing on high-need areas. “What we wanted to highlight here is that this needs to be done and reported at the neighborhood level, not at the city level,” Afane said.

The study also recommends awareness campaigns in high-risk areas emphasizing early detection, and it calls on local authorities to step up monitoring of water quality and blood lead levels in pregnant women.

“Our main goal was to use data science and machine learning tools to genuinely improve the city,” Afane said. “Data analysis is a powerful skill that could be used much more often to make a positive impact in our communities.”

Faculty Lauded for Research on Working Families, Nanotech, Environmental Justice, and More
https://now.fordham.edu/university-news/faculty-lauded-for-research-on-working-families-nanotech-environmental-justice-and-more/ Wed, 24 Apr 2024 08:35:00 +0000

Fordham honored five distinguished professors at an April 22 ceremony that celebrated the impact of faculty research and its potential for solving urgent problems facing humanity.

Dozens of faculty, staff, and students gathered at the Walsh Family Library on the Rose Hill campus for the annual Research Day Celebration. In opening remarks, Fordham’s president, Tania Tetlow, invoked several current issues—threats to democracy, artificial intelligence, climate change—in emphasizing the importance of “every insight” that Fordham faculty produce.

“There is so much that you achieve, on behalf of Fordham and on behalf of the world,” she said. “You matter in everything that you do, and you matter even more when you come together across disciplinary silos, when you think about how we can solve problems in ways that will never come from any one discipline and never come from any one way of thinking about the world.”

Fordham’s chief research officer, George Hong, Ph.D., noted a “remarkable achievement” in his introductory remarks: Since the academic year began last July, Fordham has received $34 million in external grant awards, its greatest-ever yearly total and a 40% increase over the amount received by this time last year.

Awards for Distinguished Research

The professors each received a distinguished research award in one of five categories and gave brief remarks. The humanities award went to history professor Kirsten Swinth, Ph.D., for her studies of working families originally inspired by the “mommy wars” in 2004. “I couldn’t believe that it was the 21st century and people were still arguing passionately about whether mothers should be employed,” she said, adding later that she has strived “to illuminate and change the conversation about work and family among scholars and the wider public.”

Christopher Koenigsmann, Ph.D., associate professor of chemistry, received the sciences and mathematics award for his nanotechnology research that’s applicable to renewable energy, biomedical sensors, or technology that scrubs viruses out of indoor air. He credited the undergraduate students who helped with his research. “As they’re learning physical chemistry, they’re also solving problems—real problems—for society,” he said.

President Tetlow giving opening remarks

Jie Ren, Ph.D., associate professor of information systems in the Gabelli School of Business, received the interdisciplinary studies award for her work on collective online behavior and its impacts across business, social media, and other areas. “Throughout many years of studying this topic, I realized one thing, which is individuals in the crowd need each other to be better,” she said.

The Distinguished Research Award for Junior Faculty went to Mohamed Rahouti, Ph.D., assistant professor in the computer and information science department, for his cybersecurity innovations that draw upon blockchain technology and artificial intelligence. Receiving the award, he said, “inspires me to further my research with even greater dedication and passion.”

The social sciences award went to economics professor Marc Conte, Ph.D., for his work in environmental economics and environmental justice, some of which was cited in the Biden administration’s Economic Report of the President in 2023. “I look forward to continuing my work … in the hope of guiding us toward a more stable and equitable world,” he said.

Can ChatGPT Think?

The event also included presentations by Fordham’s IBM research fellows and interns, and by participants in the University’s Faculty Research Abroad Program. The keynote speaker was David Chalmers, professor of philosophy at New York University, who gave a talk titled “Can ChatGPT Think?”

“It probably doesn’t yet have humanlike thought,” he said toward the end of his talk, “but I think it’s also well on its way.”

NYU philosophy professor David Chalmers giving the keynote address
Just Like Humans, AI Has Biases. Two Fordham Professors Received Nearly $500K to Study Them.
https://now.fordham.edu/science/just-like-humans-ai-has-biases-two-fordham-professors-received-nearly-500k-to-study-them/ Wed, 28 Feb 2024 14:36:37 +0000

Ruhul Amin, Ph.D., and Mohamed Rahouti, Ph.D., assistant professors of computer and information science at Fordham, were awarded a $493,000 grant from the Qatar Research, Development and Innovation Council to study and improve the biases of artificial intelligence.

“The main idea is to identify and understand the different types of biases in these large language models, and the best example is ChatGPT,” said Rahouti. “Our lives are becoming very dependent on [artificial intelligence]. It’s important that we enforce the concept of responsible AI.” 

Like humans, large language models like ChatGPT have their own biases, inherited from the content they source information from—newspapers, novels, books, and other published materials written by humans who, often unintentionally, include their own biases in their work. 

In their research project, “Ethical and Safety Analysis of Foundational AI Models,” Amin and Rahouti aim to better understand the different types of biases in large language models, focusing on biases against people in the Middle East. 

“There are different types of bias: gender, culture, religion, etc., so we need to have clear definitions for what we mean by bias. Next, we need to measure those biases with mathematical modeling. Finally, the third component is real-world application. We need to adapt these measurements and definitions to the Middle Eastern [population],” said Rahouti. 
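The measurement step Rahouti describes can be caricatured with a toy co-occurrence metric: count how often positive versus negative attribute words appear in the same sentence as terms for two groups. Everything below (the corpus, the word lists, and the score) is invented for illustration; real bias measurement for large language models is far more sophisticated.

```python
# Toy bias score: (positive co-occurrences) - (negative co-occurrences)
# per group, over a tiny invented corpus. Illustrative only.

corpus = [
    "the engineer from group_a was brilliant",
    "the engineer from group_b was careless",
    "group_a colleagues were helpful and brilliant",
    "group_b colleagues were helpful",
]

positive = {"brilliant", "helpful"}
negative = {"careless"}

def association(group):
    """Net positive-minus-negative attribute count for sentences mentioning group."""
    pos = neg = 0
    for sentence in corpus:
        words = set(sentence.split())
        if group in words:
            pos += len(words & positive)
            neg += len(words & negative)
    return pos - neg

# A large gap between groups suggests a skew in the underlying text.
print(association("group_a"), association("group_b"))
```

In this contrived corpus the gap between the two scores is the kind of signal a real, mathematically grounded bias metric would formalize and test at scale.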

Starting this April, Amin and Rahouti will work on the project with researchers and graduate students from Hamad Bin Khalifa University and Qatar University, both located in Qatar. Among the scholars are three Fordham students: a master’s student in data science, a master’s student in computer science, and an incoming Ph.D. student in computer science. The grant funding will partially support these students. 

This research project is funded by a Qatar-based organization that aims to develop Qatar’s research and development, said Amin, but their research results will be useful for any nation that uses artificial intelligence.

“We’re using the Middle Eastern data as a test for our model. And if [it works], it can be used for any other culture or nation,” he said. 

Using this data, the researchers aim to teach artificial intelligence how to withhold its biases while interacting with users. Ultimately, their goal is to make AI more objective and safer for humans to use, said Amin. 

“Responsible AI is not just responsibly using AI, but also ensuring that the technology itself is responsible,” said Amin, who has previously helped other countries with building artificial intelligence systems. “That is the framework that we’re after—to define it, and continue to build it.” 

Ruhul Amin and Mohamed Rahouti

Machine Learning Isn’t Just for Computer Science Majors, Professors’ Award-Winning Study Shows
https://now.fordham.edu/university-news/machine-learning-isnt-just-for-computer-science-majors-professors-award-winning-study-shows/ Thu, 20 Jul 2023 17:25:11 +0000

Machine learning doesn’t have to be hard to grasp. In fact, learning to apply it can even be fun—as shown by three Fordham professors’ efforts that earned them a new prize for innovative instruction.

Their method for introducing machine learning in chemistry classes has been honored with the inaugural James C. McGroddy Award for Innovation in Education, named for a donor who funded the award’s cash prize. (See related story.)

The recipients are Elizabeth Thrall, Ph.D., assistant professor of chemistry; Yijun Zhao, Ph.D., assistant professor of computer and information science; and Joshua Schrier, Ph.D., the Kim B. and Stephen E. Bepler Chair in Chemistry. They will share the $10,000 prize, awarded in April.

Chemistry and Computation Come Together

The three awardees’ project shows how to reduce the barriers to learning about programming and computation by integrating them into chemistry lessons. The project came together during the COVID pandemic—since chemistry students were working from their computers, far from the labs on campus, it made sense to give them some computational projects, in addition to experiments they could conduct at home, Thrall said.

Joshua Schrier

Because little had been published about teaching machine learning to chemistry students, she got together with Schrier and Zhao to design an activity. Zhao, director of the Master of Science in Data Science program at Fordham, involved a student in the program, Seung Eun Lee, GSAS ’22, who had studied chemistry as an undergraduate.

Their first classroom project—published in the Journal of Chemical Education in 2021—involves vibrational spectroscopy, used to identify the chemical properties of something by shining a light on it and recording which wavelengths it absorbs. Students built models that analyzed the resulting data and “learned” the features of different molecular structures, automating a process that they had learned in an earlier course.
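A classroom exercise of this kind can be reduced to a toy version: represent each “spectrum” as a short list of absorbance values and classify new samples by the nearest class centroid. The data and the nearest-centroid method below are illustrative assumptions, not the published activity’s actual dataset or model.

```python
# Toy spectra classifier: absorbance sampled at a few wavelengths per
# molecule class, classified by nearest class centroid. All values are
# invented for illustration.

training = {
    "alcohol": [[0.9, 0.1, 0.4], [0.8, 0.2, 0.5]],
    "alkene":  [[0.1, 0.8, 0.2], [0.2, 0.9, 0.1]],
}

def centroid(vectors):
    """Mean spectrum of a list of spectra."""
    n = len(vectors)
    return [sum(col) / n for col in zip(*vectors)]

centroids = {label: centroid(v) for label, v in training.items()}

def classify(spectrum):
    """Assign the class whose mean spectrum is nearest (Euclidean distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(centroids, key=lambda lbl: dist(spectrum, centroids[lbl]))

print(classify([0.85, 0.15, 0.45]))  # near the "alcohol" centroid
```

The appeal for teaching is that every step (averaging, distance, arg-min) is visible, before students graduate to library-backed models.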

Elizabeth Thrall

For another project, the professors taught students about machine-learning tools for identifying possible hypotheses about collections of molecules. Machine learning lets the students winnow down the molecular data and, in Schrier’s words, “make that big haystack into a smaller haystack” that is easier for a scientist to manage. The professors designed the project with help from Fernando Martinez, GSAS ’23, and Thomas Egg, FCRH ’23, and Thrall presented it at an American Chemical Society meeting in the spring.

Thumbs-Up from Students

How did students react to the machine learning lessons? According to a survey following the first project, 63% enjoyed applying machine learning, and 74% wanted to learn more about it.

“I think that students recognize that these are useful skills … that are only going to become more important throughout their lives,” Thrall said. Schrier noted that students have helped develop additional machine learning exercises in chemistry over the past two years.

Machine Learning in Education and Medicine

Yijun Zhao

Zhao noted the growing applications of machine learning and data science. She has applied them to other fields through collaborations with Fordham’s Graduate School of Education and the medical schools at New York University and Harvard, among other entities.

The McGroddy Award came as a surprise. “I don’t think that we expected to win,” Schrier said, “just because there’s so many other excellent pedagogical innovations throughout Fordham.”

Eva Badowska, Ph.D., dean of the Faculty of Arts and Sciences at the time the award was granted, said the professors’ “path-breaking interdisciplinary work has transformed lab courses in chemistry.”

There were 20 nominations, and faculty members reviewing them “were humbled by the creativity, innovation, and generative energy of the faculty’s pedagogical work,” she said.

In addition to the McGroddy Award, the Office of the Dean of Faculty of Arts and Sciences is providing two $1,000 honorable mention prizes recognizing the pedagogy of Samir Haddad, Ph.D., and Stephen Holler, Ph.D., associate professors of philosophy and physics, respectively.

Graduate Student Makes Vision Care More Accessible with Smartphone App; Project Receives NIH Funding
https://now.fordham.edu/science/graduate-student-makes-vision-care-more-accessible-with-smartphone-app/ Tue, 27 Sep 2022 21:04:11 +0000

As part of her master’s thesis, Fordham graduate student Ciara Serpa is developing a phone app that anyone can use to detect eye diseases at an early stage. The project, which recently received $100,000 in funding from the National Institutes of Health and is being conducted with faculty member Mohammad Ruhul Amin, Ph.D., and startup company iHealthScreen, aims to help people who are at risk of losing their eyesight, especially those from underserved communities.

Young Serpa with her maternal grandfather, who has had myopia since childhood, and her step-grandmother, who is now completely blind due to a diabetes-related eye disease

“I’ve seen a lot of people go blind, including my grandmother, and there are a lot of direct and indirect costs that patients suffer from,” said Serpa, a data science student in the Graduate School of Arts and Sciences. “I want to make sure that people can see as long as possible.” 

The idea for the project originally came from Amin, an assistant professor of computer and information sciences, and Alauddin Bhuiyan, Ph.D., the founder of iHealthScreen and an associate professor at Mount Sinai’s Icahn School of Medicine. While searching for thesis ideas, Serpa reached out to Amin, who then introduced her to his research with Bhuiyan. 

“Many middle-aged people have diabetes, including myself,” said Amin. “They often develop eye problems, especially age-related macular degeneration (AMD) and diabetic retinopathy. These diseases spread slowly until they reach a stage where it’s difficult to recover, but if you diagnose them early, they’re easier to manage.” 

Together, the three researchers are trying to build an app that uses artificial intelligence to detect these eye diseases at an early stage. 

Training Software to Recognize Disease Symptoms

Serpa began her thesis last fall with initial research and interviews with neurologists and ophthalmologists, who shared what they thought was needed in their field. Then she visited health care facilities in the Bronx, where she recorded images of patients’ retinas with professional equipment, focusing on patients at least 55 years old and/or diabetic. The images were then uploaded to AI software that is being trained to identify signs of AMD or diabetic retinopathy and also sent to an ophthalmologist for diagnosis. Later, Serpa compared the results from the software and the ophthalmologist to see if they both agreed on a diagnosis. 

Serpa and her maternal grandmother, who underwent lens surgery after starting to lose her eyesight due to cataracts and other side effects of diabetes

“The software uses machine-learning and deep learning to scan images, pixel by pixel, and search for specific spots that indicate a person is at risk and should be seen by a professional for further referral,” said Serpa. “Basically, we’re training the software to know what to look for in the data and to accurately diagnose patients.”

So far, Serpa has recorded and uploaded about 100 images; her goal is to collect more than 500 by the end of the study. She says that most of the time, the ophthalmologist and the software agree on a diagnosis. And the more images the software processes, the smarter it becomes.

“It’s like if you were to study for an exam and take 10 practice exams. If someone else takes 20, then that person might do better because they’ve practiced more,” said Serpa.  
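The comparison step Serpa describes, checking whether the software and the ophthalmologist assign the same label, can be sketched with invented labels:

```python
# Hypothetical sketch of the validation step: how often does the
# software's screening label match the ophthalmologist's? The labels
# below are made up for illustration.

software        = ["at-risk", "healthy", "at-risk", "healthy", "healthy"]
ophthalmologist = ["at-risk", "healthy", "healthy", "healthy", "healthy"]

# Count patients on whom the two sources agree.
agreements = sum(s == o for s, o in zip(software, ophthalmologist))
agreement_rate = agreements / len(software)

print(f"{agreement_rate:.0%}")  # 4 of 5 labels match here
```

In practice, validation of a screening tool would also break disagreements down by type, since missing a truly at-risk patient is costlier than a false alarm.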

Finally, Serpa’s team will incorporate the software into a smartphone application with which anyone can take a photo of their eye and screen themselves for eye diseases at little to no cost.

“In the past, most researchers have used a separate camera or a removable smartphone lens instead of an actual iPhone camera, but those can cost a lot of money. We’re trying to see how accurate we can get with an iPhone camera,” said Serpa. “If people can’t afford to visit a doctor, this could be a good way to first let them know that they should see a doctor and get real imaging done because we see something that may be dangerous.” 

A Cost-Effective Form of Diagnosis

After graduating from Fordham next spring, Serpa said she hopes to work full time in the medical technology field. 

“A lot of people find databases boring, but I think it’s fascinating to find patterns in the data that can be important to a business or health care system,” said Serpa, who is originally from Monroe, New York. 

She said she not only enjoys working with data, but also interacting with patients, many of whom she can personally relate to. 

“As someone who has had a lot of chronic illnesses since I was young, I feel like I understand where they’re coming from,” said Serpa, who has asthma and has suffered from migraines and fibromyalgia since childhood.

Although her thesis will be completed by May 2023, she said she plans to continue her research post-graduation. 

“In the long run, our goal is to create a cost-effective and accurate way to know that a patient is going to lose their sight, but also help them to retain some of it,” Serpa said. “Nothing’s going to reverse the damage; we can only slow down the process. But hopefully we can find a better way to detect these diseases earlier.”  

An image of Serpa’s eye, similar to the images she has taken of patients

Philip Bal Used Research, Robotics, and Real-World Solutions to Launch a Career in Computer Science
https://now.fordham.edu/fordham-magazine/philip-bal-used-research-robotics-and-real-world-solutions-to-launch-a-career-in-computer-science/ Fri, 29 Apr 2022 17:37:28 +0000

When people consider the perks of Fordham’s New York City location, they’re not necessarily thinking about the easy access to the Bronx Zoo. Or they might think of the zoo only as a diverting way to spend a few hours or to entertain family and friends. But for Massapequa, New York, native Philip Bal, a 2019 graduate of Fordham College at Rose Hill, the Bronx Zoo offered something else: an exceptional research opportunity that helped him launch a career as a software engineer at SpaceX.

Bal initially majored in biology at Fordham, but he switched to computer science in his junior year. Working closely with Damian Lyons, Ph.D., director of the University’s Robotics and Computer Vision Lab, he used technology originally associated with gaming to help herpetologists at the zoo track and study the movements of Kihansi spray toads. The toads had been classified as extinct in the wild in 2009, but in the past decade, scientists at the Bronx Zoo, headquarters of the Wildlife Conservation Society, have been breeding the toads on site and helping to reintroduce them to their native habitat in Tanzania.

According to Lyons, Bal expanded the code to track the toads effectively using depth imagery alone. He also added a color-tracking feature that made it possible to zero in on the toads when they moved, such as when jumping onto a leaf. Bal also created new software to generate behavior analytics.
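As a rough illustration of color tracking in general (not the lab’s actual depth-imagery code), one can threshold pixels in a target color range and take their centroid as the tracked position:

```python
# Toy color tracker: find the centroid of pixels inside a target color
# range in a tiny RGB "frame." The frame and threshold are invented;
# the lab's real tracker worked on depth imagery at full resolution.

# 3x4 frame of (R, G, B) pixels; the "toad" pixels are yellowish.
frame = [
    [(10, 10, 10), (200, 190, 20), (210, 200, 30), (10, 10, 10)],
    [(10, 10, 10), (205, 195, 25), (10, 10, 10),   (10, 10, 10)],
    [(10, 10, 10), (10, 10, 10),   (10, 10, 10),   (10, 10, 10)],
]

def is_target(pixel):
    """Yellowish: high red and green, low blue."""
    r, g, b = pixel
    return r > 150 and g > 150 and b < 100

hits = [(row, col)
        for row, line in enumerate(frame)
        for col, px in enumerate(line)
        if is_target(px)]

# The centroid of matching pixels marks the tracked position.
cy = sum(r for r, _ in hits) / len(hits)
cx = sum(c for _, c in hits) / len(hits)
print((cy, cx))
```

Repeating this per frame yields a trajectory, which is the raw material for the kind of behavior analytics the article mentions.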

As an undergrad, Bal was also a volunteer EMT with Fordham University EMS, and he worked as a software engineer intern at Amazon, an experience he said helped him not only get job offers but also learn “how to work professionally, scalably, and consistently in the real world.”

Today, he’s a software engineer at SpaceX, working on ground network software systems for Starlink, the aerospace manufacturer’s satellite internet service. But one day down the line, Bal said he hopes to launch his own company.

What Fordham course has had the greatest influence on you and your career path so far? How and why was it so influential?
Professor Damian Lyons’ Brains and Behaviors in Beasts and Bots. It was basically a class where we looked at different animal behaviors and then emulated them with robotics (e.g., a bug might walk around until it hits a wall, then it’ll turn and keep moving until it hits a wall, rotate, and so on. At one point we made a robot that did the same). It was a lot of fun, but I would say research outside of class was way more impactful. Classes are good for developing baseline skills, but the best way to solidify your knowledge, grow it, and put it to work is to utilize the resources available to students on campus outside of required coursework, like labs and research opportunities.

Who is the Fordham professor or person you admire the most, and why?
Definitely Lyons. Without the opportunities and encouragement he provided, I’m certain I wouldn’t have made professional progress at the same rate that I have. He introduced me to complex, real-world problems and helped me understand how to break them down into manageable chunks to create something useful. That overall thought process and all of the small nuances I learned along the way have been invaluable in my professional career.

What are you optimistic about?
I’m optimistic about our future. I think that the next few generations will have an extremely large impact on humanity’s trajectory due to their intersection with powerful and exciting technologies that they’ve grown up with, as opposed to previous generations that still remember what it was like to not have smartphones or the entire internet at their fingertips.

Center for Cybersecurity Receives $4.1 Million Grant for Scholarships
https://now.fordham.edu/politics-and-society/center-for-cybersecurity-receives-4-1-million-grant-for-scholarships/ Tue, 25 Jan 2022 20:31:40 +0000

Fordham’s Center for Cybersecurity has secured a five-year grant from the National Science Foundation (NSF) that will enable the University to provide full scholarships to undergraduate and graduate students who wish to earn a degree in cybersecurity.

The grant, “CyberCorps Scholarship for Service: Preparing Future Cybersecurity Professionals with Data Science Expertise,” was announced by the foundation on Jan. 21. It is the largest grant the Center for Cybersecurity has ever received, and is also the largest grant for cybersecurity scholarships that is funded by the federal government, according to center founder and director Thaier Hayajneh, Ph.D. Hayajneh was the principal investigator for the grant, while Gary Weiss, Ph.D., professor of computer and information sciences, was the co-principal investigator.

Recognition of Skills and Quality

Thaier Hayajneh, Ph.D., professor of computer science. Photo by Chris Taggart

“It’s a recognition that we have the ability to produce the graduates who have the skills and the qualities to serve the nation’s needs in cybersecurity,” said Hayajneh, a University Professor in the department of computer and information sciences.

Over the course of the five years, Hayajneh said, the fund is expected to cover full tuition, health insurance, housing, and related expenses for 44 “student years.” Students can earn scholarships for up to three years of study, so an undergraduate could use the funds to cover their third and fourth years of undergraduate study and their first year of master’s studies, or their final year of undergraduate studies and two years of a master’s degree. A student who enrolls in the department’s new doctoral program could have their entire three years of study covered by the grant.

A Commitment to Service

The grant is similar to one that the center was awarded in 2020 by the United States Department of Defense. Students who accept the scholarship commit to working for at least two to three years for a federal agency such as the National Security Agency. This grant, however, is larger and more flexible, as it is not restricted to specific individuals chosen by a government agency.

Fordham is one of eight universities to join the CyberCorps Scholarship for Service program this year; the program currently includes 82 universities representing 37 states, the District of Columbia, and Puerto Rico.

Key to landing the grant, Hayajneh said, was demonstrating both Fordham’s excellence in cybersecurity and data science and the heartfelt desire of its students to serve the United States, rather than simply parlay their degree into a lucrative career in the private sector. In a 20-minute in-person meeting in November, Fordham was able to show that its graduates have that kind of commitment, and Joseph M. McShane, S.J., president of Fordham, assured grant administrators of the same.

Data Science Expertise

Fordham brings to the table unique expertise in both cybersecurity and data science, Hayajneh said, through the department of computer and information sciences and the Gabelli School of Business. Students with experience in data mining and machine learning will be better equipped to work with computer systems that can predict, and not just respond, to security breaches.

“They will not only be able to detect attacks when they occur, but they will also have the ability to predict and prevent attacks before they even occur. This is very important because you want to stop the damage before it even starts,” he said.
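A minimal caricature of “predict and prevent” versus “detect and respond”: flag activity that deviates sharply from a rolling baseline, as a leading indicator worth investigating before an intrusion succeeds. The counts, window, and threshold below are all hypothetical.

```python
# Invented example of an early-warning signal: flag any hour whose
# failed-login count far exceeds the mean of the preceding window.
# Real predictive security systems use far richer features and models.

failed_logins = [4, 5, 3, 6, 4, 5, 48, 5]  # hypothetical hourly counts

def flags(series, window=4, factor=3.0):
    """Return indices whose value exceeds factor * mean of the prior window."""
    out = []
    for i in range(window, len(series)):
        baseline = sum(series[i - window:i]) / window
        if series[i] > factor * baseline:
            out.append(i)
    return out

print(flags(failed_logins))  # hour 6 stands out from its baseline
```

The point of the sketch is the ordering: the anomaly is surfaced as it emerges, giving defenders a chance to act before damage is done rather than after.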

“After you assure them that your program is ready, and you’ll have the best graduates in cybersecurity with some data science expertise, they want to make sure that you can contribute to another component, which is that you’ll assure their success rate,” Hayajneh said.

The Scholarship for Service program has a 95% success rate of placing graduates in government jobs; Hayajneh said that the University was able to point to previous alumni who’ve gone into careers in government as proof of Fordham students’ commitment to service.

Ultimately, he said, the scholarships, which are open to students of all majors and schools in the fall of 2022, are an ideal way to get a foot in the door of a challenging but rewarding field.

Experience and Mentorship

“Even if you graduate from the best program in the world, it’s always hard to start that first cybersecurity job because of a lack of experience,” he said.

That’s because the stakes are so much higher in the field than in others. An inexperienced network administrator can accidentally delete old records, invade someone’s privacy, or give enemies control of their networks, he said.

“This opportunity will guarantee students the ability to work in one of the top agencies in the cybersecurity field,” he said.

Additional Academic Partners

On Jan. 24 Fordham received word of another academic partnership in cybersecurity education: The University was accepted into the United States Cyber Command Academic Engagement Network. The newly formed network brings together 84 universities and colleges and government institutions such as the Cyber National Mission Force, U.S. Fleet Cyber Command, U.S. Marine Corps Forces Cyberspace Command, and the U.S. Coast Guard Cyber Command. The partnership will focus on future workforce development, applied cyber research, analytic partnerships, and cyber strategic dialogue.

Data Scientist Uses Algorithms to Promote Common Good
https://now.fordham.edu/science/data-scientist-uses-algorithms-to-promote-common-good/ Fri, 10 Dec 2021 19:01:16 +0000

Ruhul Amin, Ph.D., wants to understand patterns all around us.

And with the aid of technology and data, he says, there is nothing that one can’t sort through. Want to know what the common themes are of 30,000 books? There’s an algorithm for that, and his team developed the methods to understand how syntax and themes influence a book’s success. Maybe you’d like to help a country better manage the way it responds to a pandemic? There’s an algorithm for that—and he’s used it in studies such as “Adjusted Dynamics of COVID-19 Pandemic due to Herd Immunity in Bangladesh.”

“I feel like, as a scientist, we all dream of impacting the actual lives of people. It’s not just that we will limit ourselves to theoretical contributions only. I figured that our work could reach the public by working side by side with the government, especially policymakers. This is how I thought it would be the best way to achieve a common good,” he said.

“I love data science because, with data science, you can work on so many diverse projects.”

Predicting COVID-19 Spikes

Amin, a native of Bangladesh who joined the department of computer and information science as an assistant professor in 2019, has been training his data analysis tools on an array of areas, most recently the pandemic.

In “Adjusted Dynamics,” he and four collaborators examined data from the Bangladeshi government and created a new model that tries to predict how many people will become infected with COVID-19. A new model was needed because in Bangladesh, testing is prohibitively expensive, unlike in the United States, where it’s free. This means that Bangladeshi residents wait longer to get tested after initial exposure, and because COVID-19 can be spread by people who are not showing symptoms, they may be spreading it to others, causing the positivity rate to skew higher.

They started with SIRD (Susceptible-Infectious-Recovered-Deceased), a common statistical model, and modified it using a Kalman filter, an algorithm traditionally used in physics to predict the trajectory of objects in motion. For each of the country’s 64 districts, they assigned color codes of green, yellow, and red, and plotted them on a timeline from May 2021 to May 2022. Ultimately, they were able to accurately predict, 95% of the time, where rates of COVID would rise and where they would fall. He shared the methodology with the Bangladeshi government, which instituted some of the recommendations regarding actions such as lockdowns.
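The study's actual Kalman-filtered model isn't reproduced in the article, but the baseline SIRD dynamics it starts from can be sketched in a few lines. This is a minimal illustration only: the parameter values below are invented, and the Kalman-filter adjustment that is the paper's contribution is omitted.

```python
def sird_step(S, I, R, D, beta, gamma, mu, N):
    """Advance the SIRD compartments by one day.

    beta: infection rate, gamma: recovery rate, mu: death rate.
    """
    new_infections = beta * S * I / N
    recoveries = gamma * I
    deaths = mu * I
    return (S - new_infections,
            I + new_infections - recoveries - deaths,
            R + recoveries,
            D + deaths)

def simulate(S0, I0, beta, gamma, mu, days):
    """Run the model forward, returning daily (S, I, R, D) values."""
    N = S0 + I0
    S, I, R, D = float(S0), float(I0), 0.0, 0.0
    history = []
    for _ in range(days):
        S, I, R, D = sird_step(S, I, R, D, beta, gamma, mu, N)
        history.append((S, I, R, D))
    return history

# Illustrative run: an outbreak seeded with 100 cases in a population of one million.
history = simulate(S0=1_000_000, I0=100, beta=0.30, gamma=0.10, mu=0.01, days=200)
peak_infections = max(day[1] for day in history)
```

In the paper's approach, a filter like the Kalman filter would repeatedly correct this model's projections against the incoming case data, compensating for under-testing.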

Forecasting a Book’s Success

Computational research is extremely flexible and thus highly interdisciplinary in nature, Amin said. When he learned that one of his graduate students had earned a bachelor’s degree in English literature, they teamed up for a project that required a deeper understanding of both linguistics and natural language processing (NLP). Using language features such as syntax, along with the conceptual framework on which a piece of literature is based, they created NLP models to make predictions about a book’s success. The models were trained on the properties of successful books to predict either a book’s ranking on Goodreads or the number of times it has been downloaded.
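The study's actual feature set and models aren't detailed here, but the general approach, turning a text into stylistic features a model can then learn from, can be sketched as follows. Everything below is illustrative, not the authors' code: the features are deliberately crude stand-ins for the syntactic properties the study describes.

```python
def syntax_features(text):
    """Crude stylistic features: sentence length, vocabulary richness, comma rate."""
    # Treat ., !, ? as sentence boundaries (a toy segmenter).
    sentences = [s for s in text.replace("!", ".").replace("?", ".").split(".") if s.strip()]
    words = text.split()
    n_words = max(len(words), 1)
    return {
        "avg_sentence_len": n_words / max(len(sentences), 1),
        "type_token_ratio": len(set(w.lower() for w in words)) / n_words,
        "comma_rate": text.count(",") / n_words,
    }

feats = syntax_features("Call me Ishmael. Some years ago, never mind how long, I went to sea.")
```

Feature vectors like this, computed for many books, would then be paired with each book's Goodreads ranking or download count to train a standard regression or classification model.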

In a similar study, “Stereotypical Gender Associations in Language Have Decreased Over Time” (Sociological Science, 2020), Amin used an automated process to scan a million English-language books published between 1800 and 2000. He found that while stereotypical gender associations in language have decreased over time, career and science terms still demonstrate positive male gender bias, while family and arts terms still demonstrate negative male gender bias. He then extended the work at Fordham to produce another study, “A Comparative Study of Language Dependent Gender Bias in the Online Newspapers of Conservative, Semi-conservative and Western Countries.”
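The published work measures these associations with word embeddings trained on very large corpora; a much cruder co-occurrence version of the same idea can be sketched like this. The word lists and sample sentences below are illustrative only and are not drawn from the study.

```python
# Tiny illustrative word lists (the real study uses far larger, curated sets).
MALE = {"he", "him", "his", "man"}
FEMALE = {"she", "her", "hers", "woman"}
CAREER = {"office", "salary", "career", "business"}

def cooccurrence_bias(sentences):
    """Positive score => career terms co-occur more often with male words."""
    male_hits = female_hits = 0
    for s in sentences:
        tokens = set(s.lower().split())
        if tokens & CAREER:
            male_hits += bool(tokens & MALE)
            female_hits += bool(tokens & FEMALE)
    total = male_hits + female_hits
    return 0.0 if total == 0 else (male_hits - female_hits) / total

bias = cooccurrence_bias([
    "he went to his office early",
    "she discussed her salary",
    "his career in business grew",
])
```

Tracking a score like this decade by decade over a corpus is, in spirit, how a declining (or persistent) gender association can be quantified.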

The success of studies such as these has made Amin confident that he can use the technique to examine documents to detect everything from political leanings to racial bias.

Finding Patterns in Mental Health Hotline Calls

Amin is also working in the area of mental health. In collaboration with NYU and the University of Toronto, he is analyzing five years’ worth of recorded phone conversations from a popular mental health “befriending” hotline in Bangladesh. The goal is to use past records to see if any patterns emerge that can be used for the future. This could be used by healthcare professionals to better tailor messages to the public or adjust staffing levels more efficiently.

“The interesting thing is what people really discuss during, let’s say, the weekend. Is it different from the weekdays? When do you get the most calls? Is it right after you post something where you say, ‘Hey, we’re a befriending service, we’re providing this kind of help?’ When do you get suicidal calls? You can literally change this area by using this modeling,” he said.
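As a toy illustration of the temporal pattern-finding he describes, call timestamps can be bucketed by weekday versus weekend. The data below is invented, and the real project analyzes the recorded conversations themselves, not just their timing.

```python
from datetime import datetime
from collections import Counter

def calls_by_daytype(timestamps):
    """Count ISO-format call timestamps by weekday vs. weekend."""
    counts = Counter()
    for ts in timestamps:
        dt = datetime.fromisoformat(ts)
        key = "weekend" if dt.weekday() >= 5 else "weekday"
        counts[key] += 1
    return counts

counts = calls_by_daytype([
    "2021-06-05T22:30",  # Saturday night
    "2021-06-06T01:15",  # early Sunday
    "2021-06-07T09:00",  # Monday morning
])
```

Aggregations like this, broken down further by hour or by call topic, are the kind of pattern that could inform staffing levels and outreach timing.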

So long as there is enough data and computing power, Amin is optimistic that the possibilities for projects using algorithms are nearly endless. One of his projects, for instance, involves analyzing a billion tweets to ascertain what constitutes offensive and biased language. Eventually, he hopes the data collected can be deployed the way Grammarly is used to clean up grammatical mistakes, but to help us identify blind spots in our perspectives.

“I actually published a paper on gender bias, and so I thought, ‘I’m a person without any biases.’ But when I took this psychological test recently, I found that I’m still male-biased,” he said.

“We’re coming from different backgrounds, and all have these kinds of stereotypes within us. So I want to develop a tool that can suggest to you how biased and how offensive the language is that you just wrote to any person or community.”

Even Fordham itself has the potential to be a good research project; Amin has his eyes set on the collections of the University’s library system. “We’re constantly conceptualizing the whole world,” he said. “Why not Fordham?”

 

Fordham Alumni Recognized Among Top 50 Cybersecurity Leaders https://now.fordham.edu/fordham-magazine/fordham-alumni-recognized-among-top-50-cybersecurity-leaders/ Fri, 05 Nov 2021 12:35:48 +0000 https://news.fordham.sitecare.pro/?p=154479 The Consulting Report has named two Fordham graduates to its list of “The Top 50 Cybersecurity Leaders of 2021,” describing them and their fellow honorees as “some of the most experienced and forward-thinking” executives and consultants in the field.

Rocco Grillo, FCRH ’89, is a managing director in the New York office of Alvarez & Marsal, where he leads multidisciplinary teams that provide cybersecurity and incident response services to clients throughout the world.

He previously held a similar global leadership position at Stroz Friedberg, a digital forensics and cybersecurity firm co-founded by Fordham graduate and trustee Edward M. Stroz, GABELLI ’79.

Grillo, who earned a bachelor’s degree in sociology at Fordham College at Rose Hill, has worked closely with both corporate clients and government agencies, including the FBI and Secret Service.

“His 25 years of experience in cybersecurity advisory services, incident response investigations, and other technical advisory services, combined with his well-established understanding of commercial sector challenges and national security objectives, have made him influential to the development of national policy in cybersecurity—including the NIST Cybersecurity Framework,” according to The Consulting Report.

Anthony J. Ferrante, FCRH ’01, GSAS ’04, also has deep experience in both the public and private sectors. A former top cybersecurity official at the White House, he is currently the senior managing director and global head of cybersecurity for FTI Consulting.

Prior to joining FTI, he was the director of cyber incident response at the U.S. National Security Council from 2015 to 2017, and he previously served as chief of staff for the FBI’s Cyber Division.

In 2009, Ferrante, then a special agent in the FBI’s New York office, helped Fordham launch the International Conference on Cyber Security. The conference, typically held every 18 months at Fordham in partnership with the FBI, brings together university researchers, top security and law enforcement officials, and executives from companies including IBM, Microsoft, and Google.

More recently, Ferrante, who earned bachelor’s and master’s degrees in computer science at Fordham, helped establish the master’s degree program in cybersecurity at the University’s Graduate School of Arts and Sciences, where he has served as an adjunct professor. In 2021, he joined the executive committee of the Fordham President’s Council, a group of successful professionals and philanthropists who are committed to mentoring Fordham’s future leaders.

“We’ve seen countless students graduate from the [master’s degree] program and start successful careers in cybersecurity, helping both to reduce the growing cybersecurity skills gap and better protect organizations from the endless barrage of cyber threats,” he told Consulting magazine in 2019.

Since 2017, Fordham has been recognized by the U.S. National Security Agency and Department of Homeland Security as a National Center of Academic Excellence in Cyber Defense Education. The University is home to the Center for Cybersecurity, and its undergraduate and graduate programs emphasize both competency-based learning and applied research.

Fordham Welcomes Five New Trustees https://now.fordham.edu/university-news/fordham-welcomes-five-new-trustees/ Mon, 25 Oct 2021 16:07:49 +0000 https://news.fordham.sitecare.pro/?p=153926 Fordham welcomed five new members to its Board of Trustees this year. The new trustees bring a diversity of voices from the fields of media, business, cybersecurity, and philanthropy.

Gerald R. Blaszczak, S.J., FCRH ’72
Assistant to the President and Alumni Chaplain, Fairfield University

Gerald R. Blaszczak

Father Blaszczak brings with him a wealth of experience in both spiritual matters and the inner workings of universities. He entered the Society of Jesus in 1967. In 1971, he received a B.Phil. from the Philosophische Hochschule Berchmanskolleg in Pullach, Germany. A year later, he earned a B.A. in classical languages and literature from Fordham College at Rose Hill. He was ordained in 1979. In 1984, he received a Ph.D. from Harvard University in New Testament and early church history, with a secondary concentration in Islamic studies.

Over the years, he served on the faculties of LeMoyne College, Hekima College in Kenya, and Fordham. From 1998 to 2004, he served as rector of the Fordham Jesuit Community and as a member of Fordham’s Board of Trustees. He was Fordham’s first Vice President for Mission and Ministry from 1999 to 2005; he left to become rector of St. Ignatius Church. He was later called to Rome by Adolfo Nicolas S.J., then the Superior General of the Society of Jesus, to serve a three-year term as the Secretary for the Service of Faith at the Jesuit curia. Upon his return to the United States in 2011, he moved to Raleigh, North Carolina, where he was the parochial vicar at St. Raphael the Archangel Church. He has been at Fairfield University since 2018.

Darryl Emerson Brown, FCRH ’75, GSAS ’79
President, Brownboys3 Inc.

Darryl Emerson Brown

The same month that he graduated from Fordham with a master’s in broadcast journalism, Brown joined ABC Radio, where he rose through the ranks to become executive vice president and general manager. During that time, he developed African American syndicated programming such as The Tom Joyner Morning Show, and documentaries such as Love, Lust & Lies, by Michael Baisden. He was also responsible for launching the network’s Hispanic Radio Division, which produced content such as Tu Vida es Mi Vida, hosted by award-winning motivational speaker Maria Marin. In 2008, he left ABC to become president of Brownboys3 Inc, a media consulting firm that specializes in talent development, brand building, and diversified revenue development.

A member of the Fordham Athletic Hall of Fame, Brown was a three-year starter and All-American for the Rams basketball team. He graduated as the seventh leading scorer in men’s basketball history with 1,233 points and is currently ranked 18th among all players. He previously served on Fordham’s board from 2013 to 2020.

Thomas Ennis, FCRH ’88
Chief Executive Officer, P.K. Kinder Company, Inc.

Thomas Ennis

Lipton Soups, Ortega Mexican Foods, Amplify Snacks, Oberto meat snacks—Ennis has helmed the companies responsible for countless popular supermarket products since he started working in the field of consumer branding in 1996. He majored in history as an undergraduate at Fordham and later earned an MBA in marketing, which he put to use at a series of packaged goods companies. In 2014, he became founder, CEO, president, and board member of Amplify Snack Brands. The firm, which grew on the strength of brands such as SkinnyPop Popcorn, Paqui Tortilla Chips, and Oatmega Protein Bars, went public in 2015 and was sold to the Hershey Company in 2018. This year, he assumed the position of chief executive officer, president, and board member of Kinder, a privately held company that makes rubs, seasonings, and sauces.

Kathleen MacLean, FCRH ’75, PAR ’15, ’18
Philanthropist

Kathleen MacLean

MacLean has been involved in a number of volunteer and charitable endeavors for more than 40 years. During the 26 years that her children were in school, she served on parent-teacher organizations, fundraising committees, and organizing committees. She was also very involved in Girl Scouts and held fundraising positions in local youth sports programs. She currently serves on the boards of the Hockanum Valley Community Council, a nonprofit human services agency in Vernon, Connecticut, and the Bristol Community College Foundation, in her hometown of Fall River, Massachusetts. She and her husband, Brian MacLean, FCRH ’75, PAR ’15, ’18, have supported the Hartford Foundation, the Rhode Island Foundation, and Hartford Healthcare, which concentrates on programs supporting mental health, substance abuse, and fatherhood stability.

At Fordham, the MacLeans have established the Brian and Kathleen MacLean Scholarship Fund, the MacLean Family GSS Endowed Scholarship for social work students, and the Fordham Housing Fund, an endowment that provides room and board for commuter students who would most benefit from living on campus during their junior and senior years. The couple was honored at the 2016 Founder’s Dinner.

Ed Stroz, GABELLI ’79
Retired Executive Chairman, Stroz Friedberg, LLC

Ed Stroz

From 1984 to 1996, Stroz served in the FBI as a supervisory special agent, specializing in white-collar crime investigation. In 2000, he used that experience and his training as a certified public accountant and bank examiner to found Stroz Friedberg, LLC, which he built into an international digital forensics and investigations firm. He sold the firm in 2016 to Aon plc, staying on as co-president until January of this year. He’s now an independent consultant offering expert services in cybersecurity and related investigations. Stroz is the chair of the advisory board of Fordham’s International Conference on Cyber Security (ICCS). He is also a frequent participant in its conferences and serves as an adviser to the Center on Law and Information Policy at Fordham Law School. He and his wife fund the Edward M. Stroz and Sally Spooner Endowed Chair in Accounting at Fordham. The couple was honored at the 2015 Founder’s Dinner. Most recently, Stroz was awarded an honorary doctor of humane letters degree from Fordham in 2021. He previously served as a member of Fordham’s board from 2011 to 2018.

 

 

The Promise and Peril of Artificial Intelligence https://now.fordham.edu/politics-and-society/the-promise-and-peril-of-artificial-intelligence/ Thu, 30 Sep 2021 13:50:32 +0000 https://news.fordham.sitecare.pro/?p=153073

The concept of artificial intelligence has been with us since 1955, when a group of researchers first proposed a study of “the simulation of human intelligence processes by machines.” At the same time, it seems like not a day goes by without news about some development, making it feel very futuristic.

It’s also the purview of professors from a variety of fields at Fordham, such as Damian Lyons, Ph.D., a professor of computer science; R.P. Raghupathi, Ph.D., a professor of information, technology and operations at the Gabelli School of Business; and Lauri Goldkind, Ph.D., a professor at the Graduate School of Social Service.

Listen below:

Full transcript below:

Patrick Verel: Artificial intelligence is many things to many people. On the one hand, the concept has been with us since 1955, when a group of researchers first proposed a study of “the simulation of human intelligence processes by machines.” At the same time, it seems like there isn’t a day that goes by without news of some new development, making it feel very futuristic. Need to call your pharmacy? A chatbot will answer the call. Approaching another car on the highway while in cruise control? Don’t worry, your car will slow itself down before you plow into it. Just this month, the New York Times reported that an Iranian scientist was assassinated in November by an AI-assisted robot with a machine gun.

Damian Lyons

Here at Fordham, Damian Lyons is a professor of computer science on the faculty of arts and sciences. R.P. Raghupathi is a professor of information, technology and operations at the Gabelli School of Business. And Lauri Goldkind is a professor at the Graduate School of Social Service. I’m Patrick Verel, and this is Fordham News. 

Dr. Lyons, you’ve been following this field for 40 years and have witnessed some real ebbs and flows in it, why is this time different?

Damian Lyons: Well, the public perception of artificial intelligence has had some real ebbs and flows over the years. And while it’s true that humanity has been trying to create human-like machines almost since we started telling stories about ourselves, many would trace the official birth of AI as a field to a workshop that occurred at Dartmouth College in the summer of ’56. And it’s interesting that two of the scientists at that workshop had already developed an AI system that could reason symbolically, something which was supposed to be doable only by humans up until then. And while there were some successes with those efforts, by and large AI did not meet the enthusiastic predictions of its proponents, and that brought on what has often been called the AI winter, when its reputation fell dramatically. In the ’70s, things started to rise a little bit again; AI began to focus on what are called strong methods. Those are methods that make use of domain-specific information, rather than general-purpose information, to do the reasoning.

So the domain expertise of a human expert could be embodied in a computer program, and that was called an expert system. For example, the MYCIN expert system was able to diagnose blood infections as well as some experts and much better than most junior physicians. So expert systems became among the first commercially successful AI technologies. The AI logistics software that was used in the 1991 Gulf War, in a single application, was reported to have paid back all the money the government had spent funding AI up until that point. So once again, AI was in the news and riding high. But expert systems, too, lost their luster in the public eye because of their narrow application possibilities, and AI’s reputation once again dimmed, not as badly as before, but it dimmed nonetheless. In the background, though, coming up to the present day, there were two technology trends brewing.

The first was the burgeoning availability of big data via the web, and the second was the advent of multi-core technology. Both of these together set the scene for the emergence of the latest round in the development of AI, the so-called deep learning systems. So in 2012, a deep learning system not only surpassed its competitor programs at the task of image recognition but also surpassed human experts at the same task. And similar techniques were used to build AI systems that defeated the most experienced human players at games such as Go and chess, and that autonomously drove 10 million miles on public roads without serious accidents. So once again, predictions about the implications of AI are sky-high.

PV: Now, of all the recent advances, I understand one of the most significant of them is something called AlphaFold. Can you tell me why is it such a big deal?

DL: AlphaFold, in my opinion, is a poster child for the use of AI. So biotechnology addresses issues such as cures for disease, for congenital conditions, and maybe even for aging; I’ve got my fingers crossed for that one. Proteins are molecular chains of amino acids, and they’re an essential tool in biotechnology in trying to construct cures for diseases, congenital conditions, and so forth. The 3D shape of a protein is closely related to its function, but it’s exceptionally difficult to predict; the combinatorics of predicting the shape are astronomical. So this problem has occupied human attention as a grand challenge in biology for almost 50 years, and up until now, it has required an extensive trial-and-error approach to lab work and some very expensive machinery to do this prediction of shape. But just this summer, Google’s DeepMind produced the AlphaFold 2 AI program, and AlphaFold 2 can predict the 3D shape of proteins from their amino acid sequence with higher accuracy, much faster, and obviously much cheaper than experimental methods. This has been hailed in biology as a stunning breakthrough.

PV: R.P. and Lauri, do you have any thoughts on things that are unsung?

R.P. Raghupathi

R.P. Raghupathi: I would just add that medicine is a good example, the whole space of medicine, and like Damian mentioned, image recognition has been one of the most successful applications, in radiology, where radiologists are now able to spend more time at a high level, looking at unusual exception cases, as opposed to processing thousands and thousands of images and doing the busywork. So that’s been taken out, with a great deal of success. Neuralink is another example; I’m just excited that we can hopefully solve some of our brain problems, whether from accidents or Parkinson’s or Alzheimer’s, with brain implants, chip implants, and that’s terrific progress. And just more recently, with drug discovery, extending what Damian said, vaccine development and drug development have accelerated with AI and machine learning. For me, of course, the interest is also in social and public policy, and Lauri will speak to that. I’m just looking at how being data-driven in our decision making, in terms of the UN Sustainable Development Goals or poverty alleviation or whatever, just looking at the data and analyzing it with AI and deep learning, gives us more insight.

Lauri Goldkind: It’s funny, R.P., I didn’t know that we were going to go in this direction in particular, but the UN has a research roadmap for a post-COVID world, which hopefully we’ll be in soon. This research roadmap talks a lot about using AI, and it also talks about data interoperability, data sharing at the country level, in order to both meet the Sustainable Development Goals and also meet even more pressing needs: pandemic recovery, cities recovering from natural disaster. And it definitely amplifies a need for data interoperability, for deploying AI tools for these social good pieces, and for using more evidence in policymaking. Because there’s the evidence, and there’s advancements, and then there’s the policymakers, and you need to build a bridge between those two components.

Lauri Goldkind

PV: Dr. Lyons, you’ve talked about these advances being a good and positive thing for science. I know that there are also fears about AI that veer into the existential realm, this notion that robots will become self-aware. And I’m Gen X, so of course, my frame of reference for everything is the Terminator movies, and I’m thinking about Skynet, which comes to life and endangers human existence as we know it. But there’s also this idea within the field that the concept of silos makes that unlikely, or not as likely as people think. Can you explain a little bit about that?

DL: Yeah, sure. That’s a very good point, Patrick. So games like chess and Go and so forth were an early target of AI applications because there’s an assumption there, an assumption that a human who plays chess well must be intelligent and capable of impressive achievement in other avenues of life. As a matter of fact, you might even argue that the reason humans participate in these kinds of games is to sharpen their strategic skills, which they can then use to their profit in other commercial or military applications. However, when AI addresses chess, it does so by leveraging what I called previously these strong methods, so they leverage domain expertise in chess. Despite its very impressive strategy at playing Go, the AlphaGo program from DeepMind can’t automatically apply the same information to other fields. So for example, it couldn’t turn from playing Go in the morning to running a multinational company effectively in the afternoon, as a human might. We learn skills that we can apply to other domains; that’s not the case with AI.

AI tools are siloed, and I think an excellent warning case for all of us is IBM’s Watson. Where is Watson? Watson is a warning against hubris, I think, in this regard. It has not remade the fortune of IBM or accomplished any of the great tasks foretold; they’ve toned down their expectations at IBM, I believe. There are applications for which a technology such as Watson could be well used and profitable, but it was custom-built for a quiz show, so it’s not going to do anything else very easily. AI tools and systems are still developed in domain silos, so I don’t believe that the sentient AI scenario is an imminent prospect. However, the domain-specific AI tools that we have developed could still be misused, so I believe the solution is educating the developers and designers of these systems to understand the social implications of the field, so we can ensure that the systems that are produced are safe and trustworthy and used in the public good.

PV: Dr. Raghupathi, now I know robots long ago replaced a lot of blue-collar jobs; I’m thinking, for instance, of car assembly lines. Now I understand they’re coming for white-collar jobs as well. In 2019, for instance, a major multinational bank announced that, as part of a plan to lay off 18,000 workers, it would turn to an army of robots, as it were. What has changed?

RP: So I just go back to what Damian mentioned in the beginning. I mean, two trends have impacted organizations and businesses in general. One is the rapid advances in hardware technologies, both storage as well as speed, which have enabled us to do more complex and sophisticated things. And number two is the data, which he also mentioned: all of a sudden, corporations have found they’re sitting on mountains of data, and they can actually use it with all this computing power. So those two trends converged to make it an ideal situation where companies are now using AI and other techniques to automate various processes. It is slow, and we have a lot to learn, because we don’t know how to handle displacement and layoffs and so on, so companies have started with basic robotic process automation, first automating routine and repetitive tasks. But we also see more advanced work going on now. Like in the example you mentioned, banks, trading companies, and hedge funds are using automated, algorithmic trading; that’s all machine learning and deep learning. So those are replacing traders.

PV: What kind of jobs do you think are going to be the most affected by AI going forward?

RP: Well, at both ends. We know that the routine, for example, a hospital admissions process or security checks or insurance processing, anything data-driven, is already automated. And then, from the prior examples, now you call your insurance company and, for good or bad, you’re going to go through this endless loop of automated voice recognition systems. Now, the design of those is lacking quite a bit in terms of training them on different accents; they never understand my accent. So I just hit the zero button like five times and then I’ll have a human at the other end, or I’ll say, blah, blah, blah, and the system gets it, and really, it works.

Then we have the more advanced work, and financial trading is an example, but also in healthcare: diagnostic decision making, like the example that was mentioned, reading MRI images and CT scan images and X-rays. That’s pretty advanced work by radiologists, and now the deep learning systems have taken over and they’re doing an excellent job, and the radiologists are there to supervise, keeping an eye on outliers and exceptions for them.

PV: I’m glad to hear that I’m not the only one who, when I get an automated voice on the other end of the line that I just hit zero, just say, “Talk to a person, talk to a person, talk to a person.”

RP: Try blah, blah, blah, it works better, to cut to the chase.

LG: Even in my field in social work, automation, and chat is beginning to take over jobs. And so I’m working with a community partner, that’s using a chatbot as a coach for motivational interviewing, which is an evidence-based practice. And one of the challenges in evidence-based practices is how faithful the worker is to implementing the strategy of the practice. And we’re now seeing, instead of having a human coach to do technical assistance on implementing a particular practice, agencies are turning to chat because it’s efficient. So if I don’t have to pay a human coach, I can train three more workers using this chat strategy. And so we think in these highly professionalized settings that people have job security and job safety versus automation and that’s actually just not the case anymore.

PV: What implications do these advancements have for other countries?

DL: Thinking of both developed and developing countries, one potential advantage that AI holds for the future is in my own area of research, which is the application of AI and robotics to what’s called precision agriculture. The idea is that rather than spraying large areas with pesticides or covering areas with fertilizer, you use AI technology, embodied in ground robots and robot drones, to target specific spatial areas. So if you’ve got pests growing on a particular line of tomato plants or coffee plants, you can target your pesticide to just those areas. You can even use mechanical means to pull up weeds, just as people do, rather than flying a plane overhead and spraying all kinds of nasty pesticides and other stuff that ruins the environment.

LG: I was thinking, on the more positive side, about the use of chat technologies and natural language processing in mental health, and things like avatar therapy. In scenarios where there are no providers, AI has a real possibility of benefit, serving people who might not otherwise be served. And so there’s a growing understanding that depression, social connection, and wellbeing are interrelated, and are mental health challenges that are certainly related to climate change and the future of work and all those other pieces. But one way to meet that growing mental health need is to use artificial intelligence to deliver services. And so, on the positive side, I think there’s an opportunity to grow AI strategies in mental health.

RP: I think, Patrick, some of these implications are not just for developing countries but even for our country and other developed countries. I mean, take the retraining of the workforce that was alluded to: we don’t have any, even for the transition from coal mines to clean technologies. I mean, what are those people going to do if we shut down the coal mines? Are we training them in the manufacture and use of advanced energy technologies? And likewise, in the last election, there was some talk, from Andrew Yang and others, about universal income. A lot of research is going on about it, the cost-benefit analysis. So some kind of safety net, some social policy, is needed as we handle this transition to an automated workforce.

LG: I mean, let's be really clear, the reason Silicon Valley is interested in a universal basic income is that there's a dramatic understanding of what the future of employment is going to look like. The US is a global North country, and we have a very strong ethos about work and work identity. When there are no jobs, it's going to be really challenging even for people in traditional middle-class jobs to figure out their role with regard to working alongside AI.

PV: Now, Dr. Goldkind, this summer, you wrote a paper actually, and you said that social work must claim a place in the AI design and development, working to ensure that AI mechanisms are created, imagined and implemented to be congruent with ethical and just practice. Are you worried that your field is not as involved in decisions about AI development as it should be?

LG: I think that we have some catching up to do, and some deep thinking to do about how we can include content like AI, automated decision making, robotics, and generalized versus specialized intelligence in our curricula. And to Damien's earlier point, the same way our engineering students should be trained with an ethical lens, or minimally a lens on who might be an end user of some of these tools and what the implications might be, social work students and prospective social work professionals should have a similar understanding of the consequences of AI use and AI mechanisms. So I think there's a lot of room for my discipline to catch up and also to be a partner in how these systems are developed, because social work brings a particular lens: an ecosystem model, a person-in-environment approach, and a respect for human dignity.

I'm by no means suggesting that a business student or a computer science student doesn't respect human dignity, but in social work we have a set of core values that folks are opting into. And I don't think we are preparing students to be critical about these issues and think deeply about the implications. On one hand, when they're seeing a client who's been assessed by an AI or a robot, what are the tools and strategies we might use to help that person be integrated back into their community in a way that's meaningful? On the other hand, in the AI world there's a huge conversation about fairness, accountability, transparency, and ethics, and social work has a code of ethics and a long history of applying it. So we could be a real value add to the design and development process.

PV: Yeah. When we talked before this, you mentioned the idea of graduates getting used to working alongside AI, not necessarily being replaced by it. Can you talk a little bit about that?

LG: Sure. AI augmentation, rather than AI automation, is where things seem to be headed as these pieces evolve. And I think it would be useful for us as social work educators to think about how we're helping our students become comfortable with an augmented practice that uses AI in a positive light. For example, in diagnosis in the mental health world, AI can make a more accurate assessment than a human can, because, as in R.P.'s point earlier about radiology, the AI is trained to do one specific thing. So similarly in mental health, it would be great if we were teaching students how these tools can be deployed, so they can work on higher-order decision making or alternative planning and strategies, and use the AI in a complementary fashion as opposed to being completely automated.

PV: So much of this conversation revolves around jobs and the fear of losing one's job to a robot. But in your field, it seems like that will never be the case, because there's such a huge demand for mental health services that there's no way the robots can physically replace all the people.

RP: Social services can be delivered more effectively now with AI technologies, and also with data-driven approaches. Every agency is swamped with cases and workloads; sometimes it takes years to resolve whether to place a child in a foster home, or whatever it may be. These technologies will help process the data faster and more effectively and give that information and insight to the counselors, the managers, the caseworkers. So they could spend more time dealing with the high-level issues rather than paper pushing or processing data. There is really a great benefit there, again, in at least automating some of the routine and repetitive parts.

LG: Oh, absolutely. And also in terms of automated decision making, and even operations research, bringing strategies from predictive analytics and exploratory data analysis to mental health providers, community health providers, and other providers of human services. We could deploy resources in a really strategic way that agencies don't have the capacity to do through human decision making alone; AI, or a good algorithm, can make really efficient use of the data that people are already collecting.

DL: I just want to chime in on that. It's such an interesting discussion, and I feel a little out of place because I'm going to say something I normally don't say: now you're making me very worried about the application of AI. We already know there are lots of issues in the way people develop AI systems; the engineers or computer scientists developing them don't always take a great deal of care to ensure that their data is well-curated or representative from a public-good perspective. But if we're going to use those systems to help counsel and interact with vulnerable humans, then there's a tremendous opportunity for misuse, corruption, or accidental mistakes. So I'm a little worried. I think we have to be really careful if we do something like that. I'm not saying there isn't an opportunity there, but this is a case where the implications of the use of AI are pretty dramatic, even with the current state of AI. So we probably want to be very careful how we do it.

LG: In a perfect world, I would have my social work students cross-trained with your CS students, because I do think there's real value in those interdisciplinary conversations, where people become aware of unintended consequences, or of possible biases that can be embedded in data and what that means for a particular application. But I also want to note that the same way universal basic income has been discussed as a balm for future-of-work issues, predictive analytics and automated decision making are already in place in social services. They're being used, not even just tested but really used, in triaging cases in child welfare, as one could imagine, not without controversy. Allegheny County in Pennsylvania is probably the furthest along; they've deployed automated decision making to help triage cases of child welfare abuse and neglect. And it's really another case of decision making that supports human workers rather than supplanting them.

PV: Have any specific innovations in the field made you optimistic?

DL: Can you define what you mean by optimistic? So for example, if sentient AI was developed tomorrow, I’d be over the moon, I would think this would be great, but I would think that other people would say that this was the worst thing that could happen. So maybe you need to be a little more specific about what optimism means in this case.

PV: I guess the way I’m thinking about it is, when you think about the advances that we’ve made so far, and you see where things are going, in general, what do you feel is going to be the most positive thing we’ll be seeing?

RP: Medicine, I think, is one area, just so fascinating, the fact that we can give people back some of their lives, whether from Parkinson's or Alzheimer's, or as a result of wars and strokes. And then, combined with what Damien said about the biological aspect, decoding proteins, et cetera, drug discovery and solving health and medical problems is an area that is just outstanding and stunning. I would continue to follow that one.

LG: I also think in robotics specifically, which sits under the broad umbrella of AI, there are some real advances in caregiving. That has broad application, as we're an aging society, not just in the US but internationally, with not enough caregivers to offer daily living skills and support to older adults, in facilities and out, and to keep people in their homes. There are so many advances to support independent living for older persons that will be automated, from caregiving robots to smart homes and internet-of-things advances, using the big data we've been talking about to help somebody stay independent in a community. And I think those pieces show significant promise in a way that humans won't be able to catch up with fast enough.

RP: I must add to that. I've been following the experiments in remote monitoring of senior citizens done in various countries. We are a little behind, but Japan has been just so far ahead, 20 years ahead. I once saw a picture of this wonderful old lady, 85 years old, sitting in a bathing machine, like a washing machine, and she was going through all the cycles; the article joked that when she got to the spin cycle, you'd probably need an attendant to switch it off.

DL: One of the things that does make me feel good about the progress of AI in society is that there's already been attention to understanding the restrictions that need to be placed on AI. For example, winding back to one of the very first examples you gave in this talk, Patrick: lethal autonomous weapons. There have been a number of attempts, conferences, and meetings to understand how we're going to deal with the issue of lethal autonomous weapons. There are organizations such as the Future of Life Institute, whose objective is to understand how technologies like AI, which present an existential threat to human lives, could be dealt with and used effectively, but constrained, and constrained early enough that the constraints are useful.

So with AI, I think we're at that point. We can talk about trying to get folks to sign on to a lethal autonomous weapons pledge, which the Future of Life Institute is trying to do, or at least understand the issues involved and ensure that everybody understands them now, before lethal autonomous weapons are really at a serious stage where we can no longer control the genie: it's out of the bottle at that point. So that's something that makes me feel optimistic.
