The Promise and Peril of Artificial Intelligence
September 30, 2021

The concept of artificial intelligence has been with us since 1955, when a group of researchers first proposed a study of “the simulation of human intelligence processes by machines.” Yet it seems that not a day goes by without news of some new development, making the field feel very futuristic.

It’s also the purview of professors from a variety of fields at Fordham, such as Damian Lyons, Ph.D., a professor of computer science; R.P. Raghupathi, Ph.D., a professor of information, technology and operations at the Gabelli School of Business; and Lauri Goldkind, Ph.D., a professor at the Graduate School of Social Service.

Full transcript below:

Patrick Verel: Artificial intelligence is many things to many people. On the one hand, the concept has been with us since 1955, when a group of researchers first proposed a study of “the simulation of human intelligence processes by machines.” At the same time, it seems there isn’t a day that goes by without news of some new development, making it feel very futuristic. Need to call your pharmacy? A chatbot will answer the call. Approaching another car on the highway while in cruise control? Don’t worry, your car will slow itself down before you plow into it. Just this month, The New York Times reported that an Iranian scientist was assassinated in November by an AI-assisted robot with a machine gun.

Here at Fordham, Damian Lyons is a professor of computer science in the Faculty of Arts and Sciences, R.P. Raghupathi is a professor of information, technology and operations at the Gabelli School of Business, and Lauri Goldkind is a professor at the Graduate School of Social Service. I’m Patrick Verel, and this is Fordham News.

Dr. Lyons, you’ve been following this field for 40 years and have witnessed some real ebbs and flows in it. Why is this time different?

Damian Lyons: Well, the public perception of artificial intelligence has had some real ebbs and flows over the years. And while it’s true that humanity has been trying to create human-like machines almost since we started telling stories about ourselves, many would trace the official birth of AI as a field to a workshop that occurred at Dartmouth College in the summer of ’56. And it’s interesting that two of the scientists at that workshop had already developed an AI system that could reason symbolically, something which was supposed to be doable only by humans up until then. While there were some successes with those efforts, by and large AI did not meet the enthusiastic predictions of its proponents, and that brought on what has often been called the AI winter, when its reputation fell dramatically. In the ’70s, things started to rise a little bit again, as AI began to focus on what are called strong methods. Those are methods that make use of domain-specific information, rather than general-purpose information, to do the reasoning.

So the domain expertise of a human expert could be embodied in a computer program, and that was called an expert system. For example, the MYCIN expert system was able to diagnose blood infections as well as some experts, and much better than most junior physicians. So expert systems became among the first commercially successful AI technologies. The AI logistics software used in the 1991 Gulf War, in a single application, was reported to have paid back all the money the government had spent funding AI up until that point. So once again AI was in the news and riding high, but expert systems lost their luster in the public eye because of their narrow application possibilities, and AI’s reputation dimmed once again, not as badly as before, but it dimmed nonetheless. But in the background, coming up to the present day, two technology trends were brewing.

The first was the burgeoning availability of big data via the web, and the second was the advent of multi-core technology. Both of these together set the scene for the emergence of the latest round in the development of AI, the so-called deep learning systems. In 2012, a deep learning system not only surpassed its competitor programs at the task of image recognition but also surpassed human experts at that task. And similar techniques were used to build AI systems that defeated the most experienced human players at games such as Go and chess, and that autonomously drove 10 million miles on public roads without serious accidents. So once again, predictions about the implications of AI are sky-high.

PV: Now, of all the recent advances, I understand one of the most significant is something called AlphaFold. Can you tell me why it’s such a big deal?

DL: AlphaFold, in my opinion, is a poster child for the use of AI. Biotechnology addresses issues such as cures for disease, for congenital conditions, and maybe even for aging; I’ve got my fingers crossed for that one. Proteins are molecular chains of amino acids, and they’re an essential tool in biotechnology in trying to construct cures for diseases, congenital conditions, and so forth. The 3D shape of a protein is closely related to its function, but it’s exceptionally difficult to predict; the combinatorics involved in predicting the shape are astronomical. This problem has occupied human attention as a grand challenge in biology for almost 50 years, and up until now it has required an extensive trial-and-error approach to lab work, and some very expensive machinery, to predict a shape. But just this summer, Google’s DeepMind produced the AlphaFold 2 AI program, which can predict the 3D shape of proteins from their amino acid sequence with higher accuracy, much faster, and obviously much cheaper than experimental methods. This has been hailed in biology as a stunning breakthrough.

PV: R.P. and Lauri, do you have any thoughts on things that are unsung?

R.P. Raghupathi: I would just add that medicine is a good example, the whole space of medicine. Like Damian mentioned, image recognition has been one of the most successful applications, in radiology, where radiologists are now able to spend more time at a high level, looking at unusual exception cases, as opposed to processing thousands and thousands of images, doing the busywork. That’s been taken out, with a great deal of success. Neuralink is another example; I’m just excited that we can hopefully solve some of our brain problems, whether from accidents or Parkinson’s or Alzheimer’s, with brain implants, chip implants, and that’s terrific progress. More recently, extending what Damian said about drug discovery, vaccine development and drug development have accelerated with AI and machine learning. For me, the interest is also in social and public policy, and Lauri will speak to that. I’m looking at how being data-driven in our decision making, in terms of the UN Sustainable Development Goals or poverty alleviation, just looking at the data and analyzing it with AI and deep learning, gives us more insight.

Lauri Goldkind: It’s funny, R.P., I didn’t know that we were going to go in this direction in particular, but the UN has a research roadmap for a post-COVID world, which hopefully we’ll be in soon. This research roadmap talks a lot about using AI, and it also talks about data interoperability, data sharing at the country level, in order both to meet the Sustainable Development Goals and to meet possibly even more pressing needs: pandemic recovery, cities recovering from natural disaster. It definitely amplifies the need for data interoperability and for deploying AI tools for these social-good pieces, and for using more evidence in policymaking. Because there’s the evidence and there are the advancements, and then there are the policymakers, and a bridge needs to be built between those two components.

PV: Dr. Lyons, you mentioned the advances AI has brought for science being a good thing and a positive thing. I know that there are also fears about AI that veer into the existential realm, thinking of this notion that robots will become self-aware. I’m Gen X, so of course my frame of reference for everything is the Terminator movies, and thinking about Skynet, which comes to life and endangers human existence as we know it. But there’s also this idea within the field that the concept of silos will make that unlikely, or not as likely as people think. Can you explain a little bit about that?

DL: Yeah, sure. That’s a very good point, Patrick. Games like chess and Go and so forth were an early target of AI applications because there’s an assumption there: the assumption that a human who plays chess well must be intelligent and capable of impressive achievement in other avenues of life. As a matter of fact, you might even argue that the reason humans participate in these kinds of games is to sharpen strategic skills that they can then use to their profit in other commercial or military applications. However, when AI addresses chess, it does so by leveraging what I called previously strong methods, so it leverages domain expertise in chess. Despite its very impressive strategy at playing Go, the AlphaGo program from DeepMind can’t automatically apply the same information to other fields. For example, it couldn’t turn from playing Go in the morning to running a multinational company effectively in the afternoon, as a human might. We learn skills that we can apply to other domains; that’s not the case with AI.

AI tools are siloed, and I think an excellent warning case for all of us is IBM’s Watson. Where is Watson? Watson is a warning against hubris in this regard: it has not remade the fortunes of IBM or accomplished any of the great tasks foretold. They’ve toned down their expectations at IBM, I believe, and there are applications for which a technology such as Watson could be well used and profitable, but it was custom-built for a quiz show, so it’s not going to do anything else very easily. AI tools and systems are still developed in domain silos, so I don’t believe the sentient AI scenario is an imminent prospect. However, the domain-specific AI tools that we have developed could still be misused, so I believe the solution is educating the developers and designers of these systems to understand the social implications of the field, so we can ensure that the systems produced are safe and trustworthy and used in the public good.

PV: Dr. Raghupathi, I know robots long ago replaced a lot of blue-collar jobs, I’m thinking for instance of car assembly lines, but now I understand they’re coming for white-collar jobs as well. In 2019, for instance, a major multinational bank announced that as part of a plan to lay off 18,000 workers, it would turn to an army of robots, as it were. What has changed?

RP: So I just go back to what Damian mentioned in the beginning. Two trends have impacted organizations and businesses in general. One is the rapid advances in hardware technologies, in both storage and speed; those have enabled us to do more complex and sophisticated things. Number two is the data, which he also mentioned: all of a sudden, corporations have found they’re sitting on mountains of data, and with all this computing power they can actually use it. The confluence of those two trends makes it an ideal situation where companies are now using AI and other techniques to automate various processes. It is slow, and we have a lot to learn, because we don’t know how to handle displacement and layoffs and so on, so companies have started with basic robotic process automation, first automating routine and repetitive tasks. But we also see more advanced work going on now, like the example you mentioned: banks, trading companies, and hedge funds are using automated, algorithmic trading, and that’s all machine learning and deep learning. So those are replacing traders.

PV: What kind of jobs do you think are going to be the most affected by AI going forward?

RP: Well, at both ends. We know that the routine, for example, in a hospital admissions process or security checks or insurance processing, anything data-driven, is already automated. And then, from the prior examples, when you call your insurance company now, for good or bad, you’re going to go through an endless loop of automated voice-recognition systems. The design of those is lacking quite a bit in terms of training them on different accents; they never understand my accent. So I just hit the zero button like five times, and then I have a human at the other end, or I say, “blah, blah, blah,” and the system gets it, and really, it works.

Then we have the more advanced work. Financial trading is an example, but also, in healthcare, diagnostic decision making, like the example that was mentioned: reading MRI images and CT scan images and X-rays. That’s pretty advanced work by radiologists, and now the deep learning systems have taken over and are doing an excellent job, and the radiologists are there to supervise, keeping an eye on outliers and exceptions.

PV: I’m glad to hear that I’m not the only one who, when I get an automated voice on the other end of the line, just hits zero and says, “Talk to a person, talk to a person, talk to a person.”

RP: Try “blah, blah, blah”; it works better, to cut to the chase.

LG: Even in my field of social work, automation and chat are beginning to take over jobs. I’m working with a community partner that’s using a chatbot as a coach for motivational interviewing, which is an evidence-based practice. One of the challenges in evidence-based practices is how faithful the worker is in implementing the strategy of the practice. And we’re now seeing that instead of having a human coach provide technical assistance on implementing a particular practice, agencies are turning to chat because it’s efficient: if I don’t have to pay a human coach, I can train three more workers using this chat strategy. We think that in these highly professionalized settings people have job security and safety from automation, and that’s actually just not the case anymore.

PV: What implications do these advancements have for other countries?

DL: Thinking of developed countries and undeveloped countries, one potential advantage that AI holds for the future is in my own area of research, the application of AI and robotics to what’s called precision agriculture. The idea is that rather than spraying large areas with pesticides or covering areas with fertilizer, you use AI technology, embodied in ground robots and robot drones, to target specific spatial areas. So if you’ve got pests growing on a particular line of tomato plants or coffee plants, you can target your pesticide to just those areas. You can even use mechanical means to pull up weeds, just as people do, rather than flying a plane overhead and spraying all kinds of nasty pesticides and other stuff that ruins the environment.

LG: I was thinking, on the more positive side, of the use of chat technologies and natural language processing in mental health, and things like avatar therapy. In scenarios where there are no providers, AI has a real possibility of benefit in serving people who might not otherwise be served. There’s a growing understanding that depression, social connection, and wellbeing are interrelated, and that mental health challenges are certainly related to climate change and the future of work and all those other pieces. One way to meet that growing mental health need is to use artificial intelligence to deliver services. So on the positive side, I think there’s an opportunity to grow AI strategies in mental health.

RP: I think, Patrick, some of these implications are not just for developing countries but for our country and the developed countries as well. Take the retraining of the workforce that was alluded to: we don’t have any, even for the transition from coal mines to clean technologies. What are those people going to do if we shut down the coal mines? Are we training them in the manufacture and use of advanced energy technologies? Likewise, in the last election there was some talk, from Andrew Yang and others, of universal basic income, and a lot of research is going on about it, the cost-benefit analysis. So some kind of safety net, some social policy as we handle this transition to an automated workforce, is needed.

LG: I mean, let’s be really clear: the reason that Silicon Valley is interested in a universal basic income is that there’s a dramatic understanding of what the future of employment is going to look like. The US is a global North country, and we have a very strong ethos about work and a work identity. When there are no jobs, it’s going to be really challenging, even for people in traditional middle-class jobs, to figure out their role with regard to working alongside AI.

PV: Now, Dr. Goldkind, this summer you wrote a paper in which you said that social work must claim a place in AI design and development, working to ensure that AI mechanisms are created, imagined, and implemented to be congruent with ethical and just practice. Are you worried that your field is not as involved in decisions about AI development as it should be?

LG: I think that we have some catching up to do, and some deep thinking to do about how we can include content like AI, automated decision making, robotics, and generalized versus specialized intelligence in our curricula. To Damian’s earlier point, I think that the same way our engineering students should be trained with an ethical lens, or minimally a lens on who might be an end user of some of these tools and what the implications might be, social work students and prospective social work professionals should also have a similar understanding of the consequences of AI use and AI mechanisms. So I think there’s a lot of room for growth in my discipline, to catch up and also to be partners in how these systems are developed, because social work brings a particular lens: an ecosystem model, a person-in-environment approach, and a respect for human dignity.

I’m by no means suggesting that a business student or a computer science student is not respectful of human dignity, but in social work we have a set of core values that folks are opting into. And we are not, I think, preparing students to be critical about these issues and to think deeply about the implications: when they’re seeing a client who’s been assessed by an AI or a robot, what are the tools and strategies we might use to help that person be integrated back into their community in a way that’s meaningful, on the one hand. On the other hand, in the AI world there’s a huge conversation about fairness, accountability, transparency, and ethics in AI, and social work has a code of ethics and a long history of applying those codes, and so it could be a real value add to the design and development process.

PV: Yeah. I feel like when we talked before this, you mentioned this idea of graduates getting used to working alongside AI, not necessarily being replaced by it. Can you talk a little bit about that?

LG: Sure. I think AI augmentation, rather than AI automation, is where these pieces seem to be headed as they evolve. And I think it would be useful for us as social work educators to think about how we are helping our students become comfortable with an augmented practice that uses AI in a positive light. For example, in diagnosis in the mental health world, AI can make a more accurate assessment than a human can because, as with R.P.’s point earlier about radiology, the AI is trained to do this one specific thing. Similarly, in mental health, it would be great if we were teaching students how these tools can be deployed, so they can work on higher-order decision making or alternative planning and strategies and use AI in a complementary fashion, as opposed to the practice being completely automated.

PV: So much of this conversation revolves around jobs and the fear of losing one’s job to a robot. In your field, it seems like that is never going to be the case, because there’s such a huge demand for mental health services that there’s no way robots can physically replace all the people.

RP: Social services can be delivered more effectively now with AI technologies, but also with data-driven approaches. Every agency is swamped with cases and workloads; sometimes it takes years to resolve whether a child is placed in a foster home. I think these technologies will help process the data faster and more effectively and give that information, that insight, to the counselors, the managers, the caseworkers, so they can spend more time dealing with the high-level issues than with paper pushing or processing data. There is really great benefit there in at least automating some of the routine and repetitive parts.

LG: Oh, absolutely. And also in terms of automated decision making and even operations research: bringing some of those strategies from predictive analytics and exploratory data analysis into mental health providers, community health providers, and other providers of human services. We could deploy resources in a really strategic way that agencies don’t have the capacity for with human decision making alone, and AI, or a good algorithm, can make really efficient use of the data that people are already collecting.

DL: I just want to chime in on that. That’s such an interesting discussion, and I guess I feel a little out of place, because I’m going to say something I normally don’t say, which is that now you’re making me very worried about the application of AI. We already know that there are lots of issues in the way people develop AI systems; the engineers or computer scientists developing the systems don’t always take a great deal of care to ensure that their data is well curated or representative from a public-good perspective. But if we’re going to use those systems to help, to counsel, to interact with vulnerable humans, then there’s a tremendous opportunity for misuse, corruption, and accidental mistakes. So I’m a little worried. I think we have to be really careful if we do something like that. I’m not saying that there isn’t an opportunity there, but that’s a case where the implications of the use of AI are pretty dramatic, even with the current state of AI. So we probably want to be very careful about how we do that.

LG: In a perfect world, I would have my social work students cross-trained with your CS students, because I do think there’s real value in those interdisciplinary conversations, where people become aware of unintended consequences, or possible biases that can be embedded in data, and what that means for a particular application. But I also want to note that, the same way universal basic income has been discussed as a balm for future-of-work issues, predictive analytics and automated decision making are already in place in social services. They’re being used, not even tested but really used, in triaging cases in child welfare, as one could imagine not without controversy. Allegheny County, in Pennsylvania, is probably the most advanced; it has deployed automated decision making to help triage cases of child welfare abuse and neglect. And it’s really another case of decision making to support human workers, not supplant them.

PV: Have any specific innovations in the field made you optimistic?

DL: Can you define what you mean by optimistic? For example, if sentient AI were developed tomorrow, I’d be over the moon; I would think this was great. But I would think other people would say it was the worst thing that could happen. So maybe you need to be a little more specific about what optimism means in this case.

PV: I guess the way I’m thinking about it is: when you consider the advances that we’ve made so far and you see where things are going, what, in general, do you feel is going to be the most positive thing we’ll see?

RP: Medicine, I think, is one area. It’s just so fascinating that we can give people back some of their lives, whether affected by Parkinson’s or Alzheimer’s or by wars and strokes. Combine that with what Damian said about the biological aspect, decoding proteins, et cetera, and drug discovery for solving health and medical problems is one area that’s just outstanding, even stunning. I would continue to follow that one.

LG: I also think that in robotics specifically, which sits under the broad umbrella of AI, there are some real advances in caregiving. That has broad application, as we’re an aging society, not just in the US but internationally, without enough caregivers to offer support with daily living skills to older adults, in facilities and out, and to keep people in their homes. There are so many advances to support independent living for older persons that will be automated, from caregiving robots to smart homes and Internet of Things systems that use the big data we’ve been talking about to help somebody be independent in a community. I think those pieces show significant promise, in a way that humans won’t be able to catch up with fast enough.

RP: I must add to that. I’ve been following the experiments in remote monitoring of senior citizens done in various countries. We are a little behind, but Japan has been way ahead, 20 years ahead. There was once a picture of a wonderful old lady, 85 years old, sitting in a bathing machine, like a washing machine, going through all the cycles, and the article noted that when she got to the spin cycle, you would probably need an attendant to switch it off.

DL: One of the things that does make me feel good about the progress of AI in society is that attention has already been paid to understanding the restrictions that need to be placed on AI. For example, winding back to one of the very first examples you gave in this talk, Patrick: lethal autonomous weapons. There have been a number of attempts and conferences and meetings to understand how we’re going to deal with the issue of lethal autonomous weapons. There are organizations such as the Future of Life Institute, whose objective is to understand how technologies such as AI, which present an existential threat to human lives, could be dealt with and used effectively, but constrained enough, and early enough, that they remain useful.

So with AI, I think we’re at that point. We can talk about trying to get folks to sign on to a lethal autonomous weapons pledge, which the Future of Life Institute is trying to do, or at least understand what the issues are and ensure that everybody understands them now, before lethal autonomous weapons reach a serious stage where we can no longer control the genie; it’s out of the bottle at that point. So that’s something that makes me feel optimistic.

Technology Summit Seeks to Boost Bronx Tech Initiatives
October 9, 2015

Next week, local business owners, government officials, and academics will gather at Fordham’s Gabelli School of Business for the common purpose of bettering the Bronx.

The annual Bronx Summit on Technology Innovation and Startups explores the opportunities and challenges underlying the Bronx’s potential for technology-based innovation and startup activity. Sponsored by the Center for Digital Transformation, the summit, now in its fourth year, focuses on leveraging existing resources in the borough to promote economic development.

“The Bronx has good infrastructure, it’s relatively low-cost, and yet nobody focuses that much on it,” said the center’s director, Wullianallur “RP” Raghupathi, Ph.D., professor of information systems.

“Through these conferences, we want to build up the skills and the knowledge base we already have here to promote economic and technological development… and make it attractive for entrepreneurial and business activities.”

The summit is free and open to the public, but RSVP is required.

This year’s theme, “Opportunistic Growth for the Bronx in Technology: Next Step—Is the Bronx Up to the Challenge?”, pays special attention to health care technology. Speakers and panelists will discuss innovative solutions such as hosting health hackathons in which students and other programmers collaborate on building mobile applications.

Examples of what could arise from a health hackathon are remote monitoring for diabetics and “telemedicine” web conferencing for doctors and patients, Raghupathi said. But first the borough must tap into the brainpower within its borders.

“We have all these institutions, colleges, and this support from the borough president’s office as well as private entities,” he said, referencing Bronx Community College, St. Barnabas and Montefiore hospitals, the Bronx Science Consortium, and the South Bronx Development Corporation, among others. “We felt that we needed to act as an interface among these various stakeholders.”

Fordham presenters include Rosemary Wakeman, PhD, director of the urban studies program; Nisha Mistry, director of the Urban Law Center; and Carey Weiss, sustainability initiatives coordinator for the Social Innovation Collaboratory.

The summit is co-sponsored by Fordham’s urban studies program, the Urban Law Center, and the Bronx Technology Innovation Coalition (BITC).

For more information, contact Raghupathi or Center for Digital Transformation senior fellow Teresita Abay-Krueger.
