Using Generative AI to Outsmart Cyberattackers Before They Strike
https://now.fordham.edu/science-and-technology/using-generative-ai-to-outsmart-cyber-attackers-before-they-strike/
Wed, 16 Oct 2024 22:41:21 +0000

With online threats on the rise around the world, one Fordham professor is working on a potentially revolutionary way to head them off and stay one step ahead of the cybercriminals. And it has a lot to do with the tech that powers everyday programs like ChatGPT.

That tech, called generative AI, holds the key to a new system “that not only anticipates potential attacks but also prepares systems to counteract previously unseen cyberthreats,” said Mohamed Rahouti, Ph.D., assistant professor in the computer and information science department and one of Fordham’s IBM research fellows.

He and a crew of graduate students are working on new systems that, he said, are needed to get ahead of sophisticated attacks that are constantly evolving. Their focus is a type of easy-to-launch attack that has proved crippling to companies and government agencies ever since the internet began.

Denial of Service Attacks

Cybercriminals sometimes overwhelm and freeze a company’s or government agency’s computer systems by bombarding them with far more internet traffic than they can handle, using multiple computers or multiple online accounts. This is known as a distributed denial of service attack, or DDoS.
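The mechanics are simple to sketch. Below is a minimal, hypothetical example in Python, with all names and thresholds invented rather than drawn from Rahouti’s research, of why per-sender limits miss a distributed flood: each source stays modest, but the aggregate rate exceeds what the service can absorb.

```python
# Hypothetical sketch of volumetric-flood detection; thresholds invented.
from collections import deque

WINDOW_SECONDS = 10
MAX_REQUESTS_PER_WINDOW = 1000  # assumed capacity of the service

class FloodDetector:
    def __init__(self):
        self.requests = deque()  # (timestamp, source_ip) pairs

    def observe(self, timestamp: float, source_ip: str) -> bool:
        """Record a request; return True if aggregate traffic exceeds capacity."""
        self.requests.append((timestamp, source_ip))
        # Evict requests that have fallen out of the sliding window.
        while self.requests and timestamp - self.requests[0][0] > WINDOW_SECONDS:
            self.requests.popleft()
        return len(self.requests) > MAX_REQUESTS_PER_WINDOW

detector = FloodDetector()
# A DDoS spreads load across many sources, so per-IP limits miss it;
# only the aggregate rate gives the attack away.
for i in range(2000):
    overwhelmed = detector.observe(timestamp=5.0, source_ip=f"10.0.{i % 256}.{i % 250}")
print("Service overwhelmed:", overwhelmed)  # True
```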

A typical attack could cost a company $22,000 a minute, he said. Nearly 30,000 of them take place every day around the world. Many of them are foiled by programs that use machine learning and artificial intelligence.

But those programs don’t always know what to look for, since they typically rely on snapshots of past traffic, Rahouti said. Another challenge is the growing number of internet-connected devices, from smart watches to autonomous vehicles, that could provide cybercriminals with new avenues for attack.

Generative AI

Hence the research into using generative AI, which could produce a far wider range of possible attack scenarios by building on computer traffic data to make new connections and predictions, he said. When it’s trained using the scenarios produced by generative AI, “then my machine learning/AI model will be much more capable of detecting the different types of DDoS attacks,” Rahouti said.

Photo of Mohamed Rahouti by Chris Gosier

To realize this vision, Rahouti and his team of graduate students are working on several projects. They recently used generative AI and other techniques to expand upon a snapshot of network traffic data and create a clearer picture of what is and isn’t normal. This helps machine learning programs see what shouldn’t be there. “We were amazed at the quality of this enhanced picture,” Rahouti said.

This bigger dataset enabled their machine learning model to spot low-profile attacks it had previously missed, he said.
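To make the augmentation idea concrete, here is a rough sketch of the general technique rather than the Fordham team’s actual pipeline: fit a simple generative model to observed traffic features, sample synthetic flows from it, and retrain the detector on the enlarged dataset. Every feature, model choice, and number below is an assumption for illustration.

```python
# Sketch: enlarge a small traffic snapshot with synthetic samples, then
# retrain a detector. All features and parameters are invented.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Toy "snapshot": rows are flows, columns are (packets/sec, bytes/packet).
normal = rng.normal(loc=[100, 500], scale=[20, 80], size=(200, 2))
attack = rng.normal(loc=[900, 60], scale=[150, 15], size=(40, 2))

# Fit one generative model per class and sample extra examples from each.
gm_normal = GaussianMixture(n_components=2, random_state=0).fit(normal)
gm_attack = GaussianMixture(n_components=2, random_state=0).fit(attack)
synth_normal, _ = gm_normal.sample(800)
synth_attack, _ = gm_attack.sample(400)

X = np.vstack([normal, attack, synth_normal, synth_attack])
y = np.array([0] * 200 + [1] * 40 + [0] * 800 + [1] * 400)

clf = RandomForestClassifier(random_state=0).fit(X, y)
print(clf.predict([[850, 70]]))  # a flood-like flow -> expected label 1
```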

Large Language Models

For their next project, they’re studying a large language model—the kind that powers ChatGPT—for ideas about how generative AI can be applied to cybersecurity. They’re using InstructLab, an open-source tool launched by IBM and Red Hat in May.

With all the companies and university researchers invested in new uses for generative AI, Rahouti is optimistic about its future applications in cybersecurity. The goal is to develop a system that runs on its own in the background, detecting both existing and emerging threats without being explicitly told what to look for.

“At present, we don’t have a fully autonomous system with these capabilities,” Rahouti said, “but advancements in AI and machine learning are moving us closer to achieving this level of real-time, adaptive cybersecurity.”



Forbes: Gabelli School Expert Says It’s Too Soon To Tell if AI Rewards Are Worth the Risks
https://now.fordham.edu/in-the-media/forbes-gabelli-school-expert-says-its-too-soon-to-tell-if-ai-rewards-are-worth-the-risks/
Tue, 27 Aug 2024 18:06:03 +0000

W. Raghupathi, professor of information, technology, and operations, said the benefits of artificial intelligence are still difficult to measure. Read more in “When Will AI’s Rewards Surpass Its Risks?”

“Introducing new technology is always a major challenge in any organization, and AI is pretty complex,” W. Raghupathi, professor at Fordham University’s Gabelli School of Business, told Forbes. “The scale, complexity and difficulty in implementation and deployment, the upgrades, support, etc. are technology-related issues. Further, privacy, security, trust, user and client acceptance are key challenges. Justifying the cost — and we do not have good measurement models — is a major challenge.”

It’s likely too soon even to tell whether the rewards of AI are outweighing the risks, Raghupathi said. “There is a lag between deployment of applications and their impact on the business. Specific applications like low-level automation find success, but high-level applications that support strategy are yet to translate into tangible benefits.”

It’s going to take time — perhaps years — “to assess the impact and benefits of complex applications versus simple applications automating specific routine and repetitive tasks,” Raghupathi points out. “Measuring the benefit is new and we do not have benchmarks or quantitative models.”

Reading Philosophy with AI, Salamander Survival, and Reforestation: Grad Students Research Timely Topics
https://now.fordham.edu/colleges-and-schools/graduate-school-of-arts-and-sciences/reading-philosophy-with-ai-salamander-survival-and-reforestation-grad-students-research-timely-topics/
Tue, 23 Apr 2024 08:36:50 +0000

In the first gathering of its kind, students from the Graduate School of Arts and Sciences (GSAS) gathered at the McShane Campus Center on the Rose Hill campus on April 16 to celebrate the research that is a critical part of their master’s and doctoral studies.

“It’s really gratifying to see how many of the projects lean into our identity as a Jesuit institution,” said Ann Gaylin, dean of GSAS, “and strive to advance knowledge in the service of the greater good.”

Students displayed posters on topics that ranged from biology to theology to economics to psychology.

Nina Naghshineh, Ph.D. in Biological Sciences

Topic: The Role of Bacteria in Protecting Salamanders

How would you describe your research?
I study the salamander skin microbiome and how features of bacterial communities provide protection against a fungal pathogen that is decimating amphibian populations globally.

Why does this interest you?
I’m really interested in how microbes interact and function. My study system is this adorable amphibian, but the whole topic is so interesting because microbial communities are so complex and really hard to study. So the field provides many avenues for exploration. These types of associations are present in our guts and on our skin. I’m interested in going into human microbiome work after I graduate, so I have a lot of options available to me because of this research.

Nicholas McIntosh, Ph.D. in Philosophy

Topic: Using AI to Help Scholars Distill Information from a Vast Body of Texts

How would you describe your project?
It’s a digital humanities project that uses natural language processing to help read and understand many texts at once. There’s this vision we have of a really great humanities scholar who is able to know a text so well that they could almost quote it from memory. That is really difficult for us to do right now in the same way we might have when there were only a couple of touchstone classical texts.

What do you hope this will accomplish?
Scholars are scanning texts either for our classes or for our own research. So this would help us figure out, number one, how can you look at a text and be able to recognize—is this text useful for me? Number two, what are the most important concepts that we should be tracking in a text? And number three, what is the text as data telling us that maybe scholarship is overlooking or overemphasizing given traditional readings?

I would also like to show that those of us who do philosophy don’t have to be afraid of these technologies.
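For a flavor of how natural language processing can surface a text’s “most important concepts,” here is a minimal sketch using TF-IDF term weighting, a standard technique; it illustrates the general approach, not McIntosh’s actual project, and the snippets stand in for full texts.

```python
# Sketch: rank each text's terms by TF-IDF to surface distinctive concepts.
from sklearn.feature_extraction.text import TfidfVectorizer

texts = {  # placeholder snippets standing in for full philosophical texts
    "Text A": "justice in the city and the soul, guardians, and the good",
    "Text B": "virtue as a mean, habituation, practical wisdom, the good life",
}

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(texts.values())
terms = vectorizer.get_feature_names_out()

for title, row in zip(texts, matrix.toarray()):
    top = sorted(zip(row, terms), reverse=True)[:3]
    print(title, "->", [term for _, term in top])
```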

Siphesihle Sitole, Virginia Scherer, and Angel Villamar

Angel Villamar, Siphesihle Sitole, and Virginia Scherer, M.A. in International Political and Economic Development (IPED)

Project name: Climate Mitigation: The Role of a People’s Organization in the Philippines

What were you investigating with this research?
We looked at the role of the grassroots organization Tulungan sa Kabuhayan ng Calawis in dealing with climate mitigation. It was formed after Typhoon Ketsana hit in 2009. There is an area right outside of Manila that, over the years, has been deforested, so this organization mobilized to help incentivize reforestation. The farmers in the area, who are mostly women, develop the seedlings, do the land preparation, and plant the trees.

What do you hope people learn from this project?
We want to think about reforestation not as a one-time thing but as a long-term, sustainable practice. What incentives do you need so that you can keep doing this? We are showing that you can involve ordinary individuals at the grassroots level in something that is much bigger than them.

Students presented their research throughout the afternoon. Katherine Theiss, left, an economics Ph.D. student, shared findings about the best time to conduct surveys with women affected by intimate partner violence.
Can AI Promote the Greater Good? Student and Faculty Researchers Say Yes
https://now.fordham.edu/university-news/can-ai-can-promote-the-greater-good-student-and-faculty-researchers-say-yes/
Thu, 18 Apr 2024 12:55:56 +0000

At a spring symposium, Fordham faculty and students showed how they’re putting data science and artificial intelligence to good use: applying them to numerous research questions related to health, safety, and justice in society.

It’s just the sort of thing that’s supposed to happen at an institution like Fordham, said Dennis Jacobs, Ph.D., provost of the University, in opening remarks.

“Arguably, artificial intelligence is the most revolutionary technology in our lifetime, and it brings boundless opportunity and significant risk,” he said at the University’s second annual data science and AI symposium, held April 11 at the Lincoln Center campus. “Fordham’s mission as a Jesuit university inspires us to seek the greater good in all things, including developing responsible AI to benefit society.”

The theme of the day was “Empowering Society for the Greater Good.” Presenters included faculty and students—both graduate and undergraduate—from roughly a dozen disciplines. Their research ran the gamut: using AI chatbots to promote mental health; enhancing flood awareness in New York City; helping math students learn to write proofs; and monitoring urban air quality, among others.

The event drew 140 people, mostly students and faculty who came to learn more about how AI is advancing research across disciplines at Fordham.

Student Project Enhances Medical Research

Deenan He, a senior at Fordham College at Lincoln Center, presented a new method for helping researchers interpret increasingly vast amounts of data in the search for new medical treatments. In recent years, “the biomedical field has seen an unprecedented surge in the amount of data generated” because of advancing technology, said He, who worked with natural sciences assistant professor Stephen Keeley, Ph.D., on her research.

From Granting Loans to Predicting Criminal Behavior, AI Must Be Fair

Keynote speaker Michael Kearns, Ph.D., a computer and information science professor at the University of Pennsylvania, spoke about bias concerns that arise when AI models are used to decide on consumer loans, predict the risk of criminal recidivism, and make other consequential decisions. Ensuring fairness requires explicit instructions from developers, he said, but he noted that giving such instructions for one variable—like race, gender, or age—can throw off accuracy in other parts of the model.
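The trade-off Kearns describes is easy to see in miniature. The sketch below, on entirely invented data, computes one common fairness criterion, the gap in approval rates between two groups (demographic parity); closing that gap generally means shifting each group’s decision threshold, which can cost accuracy elsewhere in the model.

```python
# Sketch: measure a model's approval-rate gap across groups (invented data).
import numpy as np

rng = np.random.default_rng(1)
scores = rng.uniform(size=1000)        # model scores for loan applicants
group = rng.integers(0, 2, size=1000)  # a protected attribute (0 or 1)
approved = scores > 0.5                # a single global decision threshold

for g in (0, 1):
    print(f"group {g}: approval rate {approved[group == g].mean():.2f}")

# Forcing the two rates to match means per-group thresholds, which can
# reduce accuracy in other parts of the model -- the tension Kearns notes.
gap = abs(approved[group == 0].mean() - approved[group == 1].mean())
print(f"demographic parity gap: {gap:.3f}")
```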

Yilu Zhou, associate professor at the Gabelli School of Business, presented research on protecting children from inappropriate mobile apps.

Audits of models by outside watchdogs and activists—“a healthy thing,” he said—can lead to improvements in the models’ overall accuracy. “It is interesting to think about whether it might be possible to make this adversarial dynamic between AI activists and machine learning developers less adversarial and more collaborative,” he said.

Another presentation addressed the ethics of using AI in managerial actions like choosing which employees to terminate, potentially keeping them from voicing fairness concerns. “It changes, dramatically, the nature of the action” to use AI for such things, said Carolina Villegas-Galaviz, Ph.D., a visiting research scholar in the Gabelli School of Business, who is working with Miguel Alzola, Ph.D., associate professor of law and ethics at the Gabelli School, on incorporating ethics into AI models.

‘These Students Are Our Future’

In her own remarks, Ann Gaylin, Ph.D., dean of the Graduate School of Arts and Sciences, said, “I find it heartening to see our undergraduate and graduate students engaging in such cutting-edge research so early in their careers.”

“These students are our future,” she said. “They will help us address not just the most pressing problems of today but those of tomorrow as well.”

Keynote speaker Michael Kearns addressing the data science symposium
Building a ‘Security Culture’ with a Human Touch
https://now.fordham.edu/fordham-magazine/building-a-security-culture-with-a-human-touch/
Wed, 07 Feb 2024 17:07:48 +0000

As the founder and CEO of RevolutionCyber, a cybersecurity company that helps clients build a “security culture” within their organization, Juliet Okafor, GSAS ’03, believes that when it comes to minimizing risk, humans—not technology—are the solution.

Okafor discussed this at the 2023 Forever Learning event, At the Intersection of Human and Tech, where several other Fordham alumni also talked about their experiences in fields from journalism to fashion. During her panel, “Open AI and Cybersecurity,” Okafor recalled a lesson from a job she held prior to founding RevolutionCyber. She and her team studied the systemic failures that had made a large cruise ship company vulnerable to cyberattacks. When they spent time on one of the company’s ships, she said, it became clear that the people working there were key to identifying—and preventing—similar attacks in the future.

“The people who gave us the best information were the ones we spent the most time with, whose stories we listened to, who told us when the systems went down, how it made them feel,” Okafor said. The experience made her realize that “we have to start to think more about people and culture and behavior. Everyone was talking about security awareness. I thought, ‘We need to address security culture.’”

Okafor, who served on the GSAS Dean’s Advisory Board, credits her Fordham graduate degree in communications with helping her focus on the intersection between technology, business, and workplace culture.

“The future of cyber security is quintessentially human,” she wrote in a LinkedIn post. “As such, I truly believe cybersecurity requires a lifestyle change that we will all come to embrace as a regular part of life.”

Helping companies and people make that change is her aim with RevolutionCyber, which offers personalized employee training sessions, end-to-end assistance with cybersecurity program design and execution, and ongoing assessment options. During her presentation, she explained that AI technology can help in the quest to identify safe versus malicious behaviors by cross-comparing environments, allowing organizations to build a deeper knowledge base, enact a faster incident response, and develop better secure software.

But, she said, human concerns must always take precedence when using AI—or any technology—an approach at the heart of many of the cybersecurity programs at Fordham.

“We have to think about the humanity that is impacted by the deploying of technology. We can’t stop the AI from coming. We just have to be ready, and we need to always consider how it impacts our lives and the people around us.”

The 2024 Forever Learning event, Curating Curiosity, will take place on March 9, and you can register now.

AI-Generated Movies? Just Give It Time
https://now.fordham.edu/arts-and-culture/ai-generated-movies-just-give-it-time/
Wed, 31 Jan 2024 14:46:34 +0000

When the Writers Guild of America went on strike over the summer of 2023, one of their major grievances was the use of AI in television and movies.

A presentation at Fordham’s cybersecurity conference last month helped illustrate why.

“When I asked the CEO of a major movie company recently, ‘What’s the craziest thing you can imagine will happen in the next two to three years?’ he said, ‘We will have a full cinematic feature starring zero actors, zero cinematography, zero lighting, and zero set design,’” said Josh Wolfe, co-founder and managing director of Lux Capital, in a keynote speech on Jan. 10.

“It will all be generated.”

As an example, Wolfe, whose firm invests in new technologies, screened a fan-made movie trailer that used AI to imagine what Star Wars would look like if it had been directed by Wes Anderson.

A Threat to Storytelling


James Jennewein, a senior lecturer in Fordham’s Department of Communication and Media Studies whose film-producing credits include Major League II, Getting Even with Dad, and Stay Tuned, said the prospect of AI-powered screenwriting is deeply concerning.

He called storytelling “soul nourishment” that teaches us what it means to be human.

“We’re still watching films and reading books from people who died centuries ago, and there’s something magical about an artist digging into their soul to find some kind of truth or find a unique way to express an old truth, to represent it to the culture, and I don’t think that AI is going to help make that happen more,” he said.

In many ways, AI has already infiltrated movies and TV; major crowd scenes in the show Ted Lasso were created using AI tools, for example. This summer, the filmmakers behind Indiana Jones and the Dial of Destiny used AI to make the nearly 80-year-old Harrison Ford look like he was in his 20s.

The ability to use fewer actors in a crowd scene is obviously concerning to actors, but Jennewein said the strike was about more than just saving jobs; it was about protecting creativity.

“We don’t want AI to create the illusion that something is original when it really is just a mashup of things that have been created before,” he said.

“Flesh-and-Blood” Films Coexisting with AI

Paul Levinson, Ph.D., a professor of communications, has seen firsthand what AI can do to his own image and voice: a 2010 interview he did was recently altered by the journalist who conducted it to make it appear as if Levinson were speaking in Hindi. But he is less concerned about AI taking over the industry.

He noted that when The Birth of a Nation was first screened in 1915, it was predicted that the movie would kill off live theater.

Paul Levinson

Levinson predicted that in the future, the majority of what we watch will be AI-generated, but there will still be films made with live human actors. Just as live theater coexists with movies today, traditional movies will coexist with AI content.

“I think we are going eventually to evolve into a situation where people aren’t going to care that much about whether or not it’s an AI-generated image or a real person,” he said.

Levinson acknowledged that AI could inflict real harm on the livelihood of actors and screenwriters, but said an equally important concern is whether those who work with AI tools get the credit they deserve.

“I’m sure people are going to think I’m out of my mind, but I don’t see a difference, ultimately, between a director who is directing actors in person and somebody who understands a sophisticated AI program well enough to be able to put together a feature-length movie,” he said.

“What could ultimately happen as AI-made films become more popular, is that films that are made with real flesh-and-blood actors will advertise themselves as such, and they’ll try to do things that maybe AI can’t quite yet do, just to push the envelope.”

In Major Election Year, Fighting Against Deepfakes and Other Misinformation
https://now.fordham.edu/politics-and-society/in-major-election-year-fighting-against-deepfakes-and-other-misinformation/
Wed, 24 Jan 2024 18:29:20 +0000

With more than 50 countries holding national elections in 2024, information will be as important to protect as any other asset, according to cybersecurity experts.

And misinformation, they said, has the potential to do enormous damage.

“It’s a threat because what you’re trying to do is educate the citizenry about who would make the best leader for the future,” said Karen Greenberg, head of Fordham’s Center on National Security.

Karen Greenberg

Greenberg, the author of Subtle Tools: The Dismantling of American Democracy from the War on Terror to Donald Trump (Princeton University Press, 2021), is currently co-editing the book Our Nation at Risk: Election Integrity as a National Security Issue, which will be published in July by NYU Press.

“You do want citizens to think there is a way to know what is real, and that’s the thing I think we’re struggling with,” she said.

At the International Conference on Cyber Security held at Fordham earlier this month, FBI Director Chris Wray and NSA Director Gen. Paul Nakasone spoke about the possibility of misinformation leading to chaos around the U.S. election in a fireside chat with NPR’s Mary Louise Kelly. But politics was also a theme in other ICCS sessions.

Anthony Ferrante, FCRH ’01, GSAS ’04, global head of cybersecurity for the management consulting firm FTI, predicted this year would be like no other, in part because of how easy artificial intelligence makes it to create false but realistic audio, video, and images, sometimes known as deepfakes.

Alexander H. Southwell, Sean Newell, Anthony J. Ferrante, and Alexander Marquardt spoke at the ICCS panel discussion “A U.S. Election, Conflicts Overseas, Deepfakes, and More … Are You Ready for 2024?”
Photo by Hector Martinez

The Deepfake Defense

“I think we should buckle up. I think we’re only seeing the tip of the iceberg, and that AI is going to change everything we do,” Ferrante said.

In another session, John Miller, chief law enforcement and intelligence analyst for CNN, said major news outlets are acutely aware of the danger of sharing deepfakes with viewers.

“We spend a lot of time on CNN getting some piece of dynamite with a fuse burning on it that’s really hot news, and we say, ‘Before we go with this, we really have to vet our way backward and make sure this is real,’” he said.

He noted that if former President Donald Trump were caught on tape bragging about sexually assaulting women, as he was in 2016, he would probably respond differently today.

“Rather than try to defend that statement as locker room talk, he would have simply said, ‘That’s the craziest thing anybody ever said; that’s a deepfake,’” he said.

In fact, this month, political operative Roger Stone claimed this very defense when it was revealed that the FBI is investigating remarks he made calling for the deaths of two Democratic lawmakers. And on Monday, it was reported that, days before their presidential primary elections, voters in New Hampshire received robocall messages in a voice most likely artificially generated to impersonate President Biden, urging them not to vote in the election.

CNN’s John Miller was interviewed by Armando Nuñez, chairman of Fordham’s Board of Trustees, at a fireside chat, “Impactful Discourse: The Media and Cyber.” Photo by Hector Martinez

A Reason for Hope

In spite of this, Greenberg is optimistic that forensic tools capable of weeding out fakes will continue to be developed, and that they will contribute to people’s trust in their news sources.

“We have a lot of incredibly sophisticated people in the United States and elsewhere who understand the risks and know how to work together, and the ways in which the public sector and private sector have been able to share best practices give me hope,” she said.

“I’m hopeful we’re moving toward a conversation in which we can understand the threat and appreciate the ways in which we are protected.”

Need the Latest Research for Your Course Curriculum? AI Can Help
https://now.fordham.edu/science/need-the-latest-research-for-your-course-curriculum-ai-can-help/
Mon, 22 Jan 2024 15:20:28 +0000

One of the biggest challenges professors face in creating their course curriculum is making sure they include the latest and most relevant research in their fields.

That’s why Michelle Rufrano, an adjunct sociology professor, decided to plan her upcoming course a little differently this time—by using a new AI tool.

Rufrano is the CEO of CShell Health, a media technology company that aims to curate health information and use it to help create social change. She worked with her business partner, Jean-Ezra Yeung, a data scientist with a master’s in public health, to develop an augmented intelligence tool that can sift through hundreds of thousands of articles of research and synthesize them into various themes.

Rufrano recently used the tool to plan her Coming of Age: Adulthood course at Fordham, sourcing readings from scholarly articles available on PubMed, an online biomedical literature database. The tool organized those articles into knowledge graphs, geometric visualizations that map out the correlations and topics most present in the research, without a professor having to manually sort through article titles and abstracts.
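As a rough illustration of the idea, and not CShell Health’s actual tool, a simple knowledge graph can be built by counting how often key terms co-occur within the same abstract; the heaviest edges point at the dominant themes. The abstracts and keyword list below are invented.

```python
# Sketch: build a term co-occurrence graph from article abstracts.
from collections import Counter
from itertools import combinations

abstracts = [  # invented stand-ins for PubMed abstracts
    "adolescent development and social identity in emerging adulthood",
    "emerging adulthood, employment instability, and mental health",
    "mental health outcomes of social identity in adolescent cohorts",
]
keywords = {"adolescent", "adulthood", "identity", "employment", "health"}

edges = Counter()  # edge (term_a, term_b) -> number of co-occurrences
for text in abstracts:
    found = sorted({w.strip(",.") for w in text.split()} & keywords)
    for a, b in combinations(found, 2):
        edges[(a, b)] += 1

# Heavily weighted edges suggest the themes a curriculum should cover.
for (a, b), weight in edges.most_common(3):
    print(f"{a} -- {b}: {weight}")
```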

According to Rufrano, this method allowed her to plan her curriculum and readings much more efficiently.

“It cuts the research time in half,” Rufrano said. “That kind of document review would usually take me about four months of looking through all of that data. It’s down to about two weeks.”

Rufrano’s course explores life course theory, which aims to analyze the structural, social, and cultural contexts that shape human behavior from birth to death. Because the field is relatively specialized, Rufrano said, it can be challenging to find materials, particularly ones that include the most recent research. She said their AI tool is uniquely suited to solve this problem.

“I would have never found some of these studies that came up in the knowledge graphs, because they were published last month, and just would have probably escaped the regular search engines,” Rufrano said. “You would have had to put in some very specific language that you wouldn’t have necessarily known to use.”

Rufrano said it is crucial that students are exposed to a mix of current research in addition to classical works when preparing to enter careers in the field.

“That is so valuable for students who are going into a very volatile workforce. They need to have this very up-to-date information,” she said.

Future Uses for the AI Tool

Rufrano and Yeung met while studying for a master’s in public health, and went on to form CShell Health, which uses augmented intelligence to reframe consumer health information and make it more accessible. The course planning model was an early experiment in what they hope will be a total reimagining of public health literacy.

“We can address really salient issues like how institutional discrimination is embedded in language,” Rufrano said. “If we can see the vulnerabilities in the data, then we can correct for the bias in the research. That’s my dream for the company.”

Hackers Use AI to Improve English, Says NSA Official
https://now.fordham.edu/university-news/hackers-use-ai-to-improve-english-says-nsa-official/
Wed, 10 Jan 2024 23:03:36 +0000

From “hacktivists” backed by foreign governments to the advantages and perils of artificial intelligence, National Security Agency (NSA) Director of Cybersecurity Rob Joyce highlighted three areas of focus in the cybersecurity field at the 10th International Conference on Cyber Security, held at Fordham on Jan. 9.

Better English-Language Outreach

The use of artificial intelligence is both a pro and con for law enforcement, Joyce said.

“One of the first things [bad actors are] doing is they’re just generating better English language outreach to their victims [using AI]—whether it’s phishing emails or something more elaborative,” he said. “The second thing we’re starting to see is … less capable people use artificial intelligence to guide their hacking operations to make them better at the technical aspect of a hack.”

But Joyce said that “in the near term,” AI is “absolutely an advantage for the defense,” as law enforcement officials are using AI to get “better at finding malicious activity.”

For example, he said that the NSA has been watching Chinese officials attempt to disrupt critical infrastructure, such as pipelines and transportation systems, in the United States.

“They’re not using traditional malware, so there’s not the things that the antivirus flags,” Joyce said.

Instead, he said they’re “using flaws” in a system’s design to take over or create accounts that appear authorized.

“But machine learning AI helps us surface those activities because those accounts don’t behave like the normal business operators,” Joyce said.
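A minimal sketch of the behavioral approach Joyce describes, with invented features and numbers rather than anything from the NSA’s tooling: train an anomaly detector on normal account activity, so that accounts that do not behave like typical business operators stand out.

```python
# Sketch: flag accounts whose behavior departs from the normal baseline.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(2)

# Rows: accounts. Columns: (logins/day, off-hours logins, hosts touched).
normal_accounts = np.column_stack([
    rng.poisson(8, 500),   # routine login volume
    rng.poisson(1, 500),   # rare off-hours activity
    rng.poisson(3, 500),   # few hosts per account
])

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_accounts)

# An "authorized-looking" account that probes many hosts at odd hours:
suspect = [[9, 12, 40]]
print(detector.predict(suspect))  # -1 means flagged as anomalous
```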

‘Hacktivists’ Role in Israel-Hamas Conflict

Joyce said one of the biggest challenges for cybersecurity officials is understanding who is conducting cyber attacks and why. For example, while cyber officials have been seeing an uptick in “hacktivists,” or hackers who are activists, they’ve been seeing more foreign governments backing them and posing as them.

“The Israel-Hamas conflict going on right now—there’s a tremendous amount of hacktivist activity, and we see it on both sides of the equation,” Joyce said. “But the interesting piece in some of this is the nation-states are increasingly cloaking their activities in the thin veil of activists’ activity—they will go ahead and poke at a nation-state, poke at critical infrastructure, poke at a military or strategic target, and try to do that in a manner that looks to be this groundswell of activist activity. That’s another place where we need that intelligence view into really what’s behind the curtain, because not all is as it seems.”

Unclassifying Information: ‘A Sea Change’

Joyce said that one of the biggest sea changes, and culture changes, at the NSA is sharing classified information with the private sector.

“We’re taking our sensitive intelligence, and we’re getting that down to unclassified levels that work with industry,” Joyce said. “Why? Because there might be one or two people in a company who are cleared for that intelligence, but chances are the people who can do something about it, they’re the folks who actually are not going to have a clearance.”

Joyce said that the department has decided to shift its stance on sharing intelligence in part because “what we know is not nearly as sensitive as how we know it” and because “knowing something really doesn’t matter if you don’t do something about it; industry is the first that can do something about it.”

FBI and NSA Directors on 2024 Elections: Worry About Chaos, Not Vote Count
https://now.fordham.edu/university-news/fbi-and-nsa-directors-on-2024-elections-worry-about-chaos-not-vote-count/
Tue, 09 Jan 2024 23:45:28 +0000

Ahead of the 2024 presidential vote, FBI Director Chris Wray and NSA Director Gen. Paul Nakasone warned of potential threats that could interfere with the election, but said that Americans should feel confident in their ballots.

“Americans can and should have confidence in our election system,” Wray said. “And none of the election interference efforts that we’ve seen put at jeopardy the integrity of the vote count itself in any material ways. And so in that sense, people can have confidence.” 

But that doesn’t mean there aren’t threats to the election process, he said, particularly highlighting foreign governments’ desire to meddle. 

“The other part, though, is the chaos, and the ability to generate chaos is very much part of the playbook that some of the foreign adversaries engage in,” Wray said. “And there is the potential. If we’re not all collectively on board, that chaos can ensue to varying levels.”

Wray and Nakasone spoke in a fireside chat moderated by Mary Louise Kelly, host of NPR’s All Things Considered, at the 10th International Conference on Cyber Security, held at Fordham on Jan. 9. Kelly asked how 2024 compares to the 2020 election year.

“Every election as you know is critical infrastructure,” Nakasone said. “We have to be able to deliver a safe and secure outcome. And so when I look at it, I look in terms of both the threat and the technology—but yes, it’s an important year, it’s a presidential election year, and we have adversaries that want to take action.”

Protecting America’s AI Innovation 

Nakasone said that as they look at foreign adversaries and how they are using AI, he noticed that they “are all using U.S. AI models, which tells me that the best AI models are made by U.S. companies.”

“That tells me that we need to protect that competitive advantage of our nation, of our national economy going forward,” he said. 

But that’s not an easy task, Wray added, noting China’s advantage in particular.

“China has a bigger hacking program than that of every other major nation combined and has stolen more of Americans’ personal and corporate data than every nation, big or small, combined,” he said. “If I took the FBI’s cyber personnel and I said, ‘Forget ransomware, forget Russia, forget Iran—we’ll do nothing but China,’ we would be outnumbered 50 to 1, and that’s probably a conservative estimate.” 

Nakasone said that’s why it’s important for the agencies to maintain the United States’ “qualitative advantage.”

“How do we ensure that our workforce is continuing to be incredibly productive?” he said.

Combatting Foreign Adversaries 

In addition to China, Wray and Nakasone highlighted Russia and Iran as threats, even as Russia is occupied with the war in Ukraine. 

“If anything, you could make the argument that their focus on Ukraine has increased their desire to focus on trying to shape what we look like, and how we think about issues because U.S. policy on Ukraine is something that obviously matters deeply to their utterly unprovoked and outrageous invasion of Ukraine,” Wray said.  

In order to combat their efforts to interfere in elections, Nakasone highlighted partnerships between agencies like the NSA and FBI, and the quality of work that U.S. agencies do.

“It will never be having the most people—it’s having the best people and the best partnership being able to develop and deliver outcomes that can address adversaries,” he said.

Calling Out Misinformation and Disruptions

Kelly highlighted a recent poll from The Washington Post that found that one-third of Americans believe that President Joe Biden’s win in 2020 was illegitimate and that a quarter of Americans believe that the FBI instigated the January 6 insurrection. 

“I’m not trying to drag either of you into politics,” she said. “But what kind of charge does that pose for your agencies as you try to navigate this year?”

Wray said it’s important for the NSA and FBI to call out misinformation right away. He highlighted how in October 2020, the FBI called out Iran’s interference efforts ahead of the November elections in an effort to make the messaging less effective.

“We have to call it out when we see it, but we also need in general for the American people, as a whole, to become more thoughtful and discerning consumers of information,” he said. 

The Use of Section 702: ‘A Vital Tool’

In December 2023, Congress gave a four-month extension to Section 702 of the Foreign Intelligence Surveillance Act (FISA), which allows intelligence agencies to conduct surveillance on non-American citizens who are outside of the United States without a warrant. The section has come under scrutiny from privacy advocates and members of both parties who say it is an overreach of government powers.

Nakasone called it “the most important authority we use day in and day out in the National Security Agency to protect Americans.”

He said that the agency uses it to address a number of different threats: “whether or not that’s fentanyl or Chinese precursors [to fentanyl] coming into the United States, whether or not it’s hostages that foreigners take overseas, whether or not it’s cybersecurity, in terms of victims that we’re seeing in the United States.”

Wray said that the section was “a vital tool.”

“This country would be reckless at best and dangerous at worst to blind ourselves and not reauthorize the authority in a way that allows us to protect Americans from these foreign threats,” he said. 

Finance Exec Offers Gabelli Graduate Students Insights on AI, Investment
https://now.fordham.edu/fordham-magazine/finance-exec-offers-gabelli-graduate-students-insights-on-ai-investment/
Mon, 08 Jan 2024 21:11:38 +0000

Veteran financial executive Peter Zangari, Ph.D., FCRH ’89, has some advice for students pursuing graduate degrees in business analytics and information technology, and it may surprise you.

You don’t need to dive headfirst into computer science and programming to succeed in those spaces, he told students in the Gabelli School of Business during a talk at Fordham’s Lincoln Center campus in November.

Zangari retired in early 2023 from his role as global head of research and product development at MSCI after more than 25 years in the finance industry. His retirement was a short one, though: Last month, he was named a partner and head of the Americas at MDOTM, a company that specializes in “AI-driven investment solutions.”

What AI Can—and Can’t—Do

During the student enrichment event, Zangari reflected on his professional experiences and shared insights on data analytics to help students better prepare themselves for careers in the industry. He said technology skills aren’t as critical to long-term success in finance as understanding how to apply technical tools like artificial intelligence.

“In this space, students should do their best to understand how people make investment decisions, and then learn about artificial intelligence—learn about what it can do, and what it is capable of doing—and then apply that to how investors make investment decisions,” he said.

He encouraged students to see AI as a partner, not a substitute for effective portfolio managers, and he said problems may arise when people “think [AI] can solve certain problems, like predicting the future, which I think is really a far-fetched idea.”

A Living Resource

The students in attendance said they were grateful for the opportunity to hear from an industry professional firsthand, peppering him with questions about trends, investment strategy, and his experiences with different employers.

“I’m really interested in finance and tech, and looking to go into that after I finish my master’s,” said Ruth Kissel, who is studying business analytics. “So I wanted to listen to a really experienced professional speak about those same topics.”

The M.S. in business analytics (MSBA) and M.S. in information technology (MSIT) programs are offered by the Gabelli School’s Information, Technology, and Operations area.

In the MSBA program, students learn to integrate analytics techniques, data management, information technology, modeling, and statistical analysis to become more effective analysts and informed users of business data. The MSIT program focuses on systems development, training students to gain the technical skills they need to excel in IT management positions. Grads of the two programs have gone on to work at companies including Amazon, American Express, Deloitte, JPMorgan Chase, and the Metropolitan Transportation Authority.

Zangari, who studied economics at Fordham, said he knows how vital it is for students to have access to alumni and industry professionals, so he spends “as much time as possible being available to students.” He’s an adjunct professor at Drew University in New Jersey, and at Fordham, he’s a member of the President’s Council, a group of successful professionals and philanthropists who are committed to mentoring Fordham’s future leaders, funding key initiatives, and raising the University’s profile.

“I see how the students kind of lean in,” he said. “When you tell a story about your career, you tell a story about your life because, in a nutshell, one’s career is a reflection of life.”

Zangari said that at Fordham, he had an opportunity to learn and work with “people from all different walks of life,” and it was invaluable.

It’s not all about the hard skills, he said. Everyone will have those, but “what makes an employee very attractive is someone who has super-interest in what they’re doing. They’re self-motivated. They’re resourceful.”
