AITech Interview with Joel Rennich, VP of Product Strategy at JumpCloud
https://ai-techpark.com/aitech-interview-with-joel-rennich/ (AI-Tech Park, Tue, 02 Jul 2024)

Learn how AI influences identity management in SMEs, balancing security advancements with ethical concerns.

Joel, how have the unique challenges faced by small and medium-sized enterprises influenced their adoption of AI in identity management and security practices?

So we commission a biannual small to medium-sized enterprise (SME) IT Trends Report that looks specifically at the state of SME IT. This most recent version shows how quickly AI has impacted identity management and highlights that SMEs are kind of ambivalent as they look at AI. IT admins are excited and aggressively preparing for it—but they also have significant concerns about AI’s impact. For example, nearly 80% say that AI will be a net positive for their organization, 20% believe their organizations are moving too slowly concerning AI initiatives, and 62% already have AI policies in place, which is pretty remarkable considering all that IT teams at SMEs have to manage. But SMEs are also pretty wary about AI in other areas. More than six in ten (62%) agree that AI is outpacing their organization’s ability to protect against threats, and nearly half (45%) agree they’re worried about AI’s impact on their job. I think this ambivalence reflects the challenges SMEs face in evaluating and adopting AI initiatives – with smaller teams and smaller budgets, they don’t have the resources, training, and staff their enterprise counterparts have. But I think it’s not unique to SMEs. Until AI matures a little bit, I think that AI can feel more like a distraction.

Considering your background in identity, what critical considerations should SMEs prioritize to protect identity in an era dominated by AI advancements?

I think caution is probably the key consideration. A couple of suggestions for getting started:

Data security and privacy should be the foundation of any initiative. Put in place robust data protection measures to safeguard against breaches like encryption, secure access controls, and regular security audits. Also, make sure you’re adhering to existing data protection regulations like GDPR and keep abreast of impending regulations in case new controls need to be implemented to avoid penalties and legal issues.

When integrating AI solutions, make sure they’re from reputable sources and are secure by design. Conduct thorough risk assessments and evaluate their data handling practices and security measures. And for firms working more actively with AI, research and use legal and technical measures to protect your innovations, like patents or trademarks.

With AI, it’s even more important to use advanced identity and access management (IAM) solutions so that only authorized individuals have access to sensitive data. Multi-factor authentication (MFA), biometric verification, and role-based access controls can significantly reduce the risk of unauthorized access. Continuous monitoring systems can help identify and thwart AI-related risks in real time, and having an incident response plan in place can help mitigate any security breaches.
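The combination of role-based access controls and MFA described above can be sketched as a single authorization check. This is a minimal illustration, not any specific IAM product's API; the role names, resource names, and policy tables are invented for the example.

```python
# Minimal sketch of a role-based access check with an MFA requirement
# for sensitive resources. Roles, resources, and the policy tables are
# illustrative only.

ROLE_PERMISSIONS = {
    "admin":   {"hr-records", "billing", "wiki"},
    "finance": {"billing", "wiki"},
    "staff":   {"wiki"},
}

# Resources sensitive enough to require a second factor.
MFA_REQUIRED = {"hr-records", "billing"}

def authorize(role: str, resource: str, mfa_verified: bool) -> bool:
    """Allow access only if the role grants the resource and, for
    sensitive resources, MFA has been completed."""
    if resource not in ROLE_PERMISSIONS.get(role, set()):
        return False
    if resource in MFA_REQUIRED and not mfa_verified:
        return False
    return True
```

The point of the sketch is that both checks are enforced at a single choke point, so a missing second factor fails closed even for a role that would otherwise be entitled to the resource.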

Lastly, but perhaps most importantly, make sure that the AI technologies are used ethically, respecting privacy rights and avoiding bias. Developing an ethical AI framework can guide your decision-making process. Train employees on the importance of data privacy, recognizing phishing attacks, and secure handling of information. And be prepared to regularly update (and communicate!) security practices given the evolving nature of AI threats.

AI introduces both promises and risks for identity management and overall security. How do you see organizations effectively navigating this balance in the age of AI, particularly in the context of small to medium-sized enterprises?

First off, integrating AI has to involve more than just buzzwords – and I’d say that we still need to wait until AI accuracy is better before SMEs undertake too many AI initiatives. But at the core, teams should take a step back and ask, “Where can AI make a difference in our operations?” Maybe it’s enhancing customer service, automating compliance processes, or beefing up security. Before going all in, it’s wise to test the waters with pilot projects to get a real feel of any potential downstream impacts without overcommitting resources.

Building a security-first culture—this is huge. It’s not just the IT team’s job to keep things secure; it’s everybody’s business. From the C-suite to the newest hire, SMEs should seek to create an environment where everyone is aware of the importance of security, understands the potential threats, and knows how to handle them. And yes, this includes understanding the role of AI in security, because AI can be both a shield and a sword.

AI for security is promising as it’s on another level when it comes to spotting threats, analyzing behavior, and monitoring systems in real time. It can catch things humans might miss, but again, it’s VITAL to ensure the AI tools themselves are built and used ethically. AI for compliance also shows a lot of promise. It can help SMEs stay on top of regulations like GDPR or CCPA to avoid fines but also to build trust and reputation. 

Because there are a lot of known unknowns around AI, industry groups can be a good source for information sharing and collaboration. There’s wisdom and a strength in numbers and a real benefit in shared knowledge. It’s about being strategic, inclusive, ethical, and always on your toes. It’s a journey, but with the right approach, the rewards can far outweigh the risks.

Given the challenges in identity management across devices, networks, and applications, what practical advice can you offer for organizations looking to leverage AI’s strengths while addressing its limitations, especially in the context of password systems and biometric technologies?

It’s a surprise to exactly no one that passwords are often the weakest security link. We’ve talked about ridding ourselves of passwords for decades, yet they live on. In fact, our recent report just found that 83% of organizations use passwords for at least some of their IT resources. So I think admins in SMEs know well that despite industry hype around full passwordless authentication, the best we can do for now is to have a system to manage them as securely as possible. In this area, AI offers a lot. Adaptive authentication—powered by AI—can significantly improve an org’s security posture. AI can analyze things like login behavior patterns, geo-location data, and even the type of device being used. So, if there’s a login attempt that deviates from the norm, AI can flag it and trigger additional verification steps or step-up authentication. Adding dynamic layers of security that adapt based on context is far more robust than static passwords.
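The adaptive-authentication idea above — score a login attempt against the user's usual context and trigger step-up verification when the score crosses a threshold — can be sketched in a few lines. The signals, weights, and thresholds here are invented for illustration; a production system would learn them from behavioral data rather than hard-code them.

```python
# Toy risk-scoring sketch for adaptive authentication. Signals and
# weights are illustrative, not from any real product.

def risk_score(attempt: dict, profile: dict) -> int:
    """Sum weighted penalties for signals that deviate from the user's norm."""
    score = 0
    if attempt["country"] != profile["usual_country"]:
        score += 40                      # unfamiliar geo-location
    if attempt["device_id"] not in profile["known_devices"]:
        score += 30                      # unrecognized device
    if attempt["hour"] not in profile["usual_hours"]:
        score += 15                      # unusual time of day
    return score

def decide(attempt: dict, profile: dict, threshold: int = 50) -> str:
    """Return 'allow', 'step-up' (extra verification), or 'deny'."""
    score = risk_score(attempt, profile)
    if score >= 80:
        return "deny"
    if score >= threshold:
        return "step-up"
    return "allow"

profile = {
    "usual_country": "US",
    "known_devices": {"laptop-1"},
    "usual_hours": set(range(8, 19)),    # 08:00-18:00 local time
}
```

A familiar device from the usual country sails through; an unknown device from a new country triggers step-up authentication rather than an outright denial, which is the "dynamic layers of security" behavior described above.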

Biometric technologies offer a unique, nearly unforgeable means of identification, whether through fingerprints, facial recognition, or even voice patterns. Integrating AI with biometrics makes them much more precise because AI algorithms can process complex biometric data quickly, improve the accuracy of identity verification processes, and reduce the chances of both false rejections and false acceptances. Behavioral biometrics can analyze typing patterns, mouse or keypad movements, and navigation patterns within an app for better security. AI systems can be trained to detect pattern deviations and flag potential security threats in real time. The technical challenge here is to balance sensitivity and specificity—minimizing false alarms while ensuring genuine threats are promptly identified.
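The behavioral-biometrics point — detect deviations from a user's typing rhythm, balancing sensitivity against specificity — reduces, in its simplest form, to comparing a new sample against an enrolled baseline. The sketch below uses a crude z-score on inter-keystroke intervals; real systems use far richer features and models, and the threshold value is an assumption for illustration.

```python
# Simplified behavioral-biometrics sketch: flag a typing-rhythm sample
# (inter-keystroke intervals, in ms) that deviates strongly from the
# user's enrolled baseline. The threshold trades sensitivity (catching
# impostors) against specificity (not flagging the real user).

from statistics import mean, stdev

def deviation(baseline: list[float], sample: list[float]) -> float:
    """How many baseline standard deviations the sample mean sits from
    the baseline mean (a crude z-score)."""
    mu, sigma = mean(baseline), stdev(baseline)
    return abs(mean(sample) - mu) / sigma

def is_anomalous(baseline: list[float], sample: list[float],
                 threshold: float = 3.0) -> bool:
    return deviation(baseline, sample) > threshold
```

Raising the threshold reduces false alarms for the genuine user but lets more impostors through; lowering it does the reverse — exactly the sensitivity/specificity balance mentioned above.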

A best practice with biometrics is to employ end-to-end encryption for biometric data, both at rest and in transit. Implement privacy-preserving techniques like template protection methods, which convert biometric data into a secure format that protects against data breaches and ensures that the original biometric data cannot be reconstructed.
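The template-protection idea — store a transformed, non-reversible form of the biometric rather than the raw data — can be illustrated with a keyed one-way transform over quantized features. This is a deliberately simplified sketch: real schemes (fuzzy extractors, cancelable biometrics) handle sensor noise with error-correcting constructions, whereas the toy quantization below only tolerates tiny perturbations. All names and parameters are assumptions for the example.

```python
# Toy "template protection" sketch: store a keyed, one-way transform of
# quantized biometric features instead of the features themselves. The
# original biometric cannot be reconstructed from the stored value, and
# revoking the per-user key "cancels" the template.

import hashlib
import hmac

def protect(features: list[float], user_key: bytes) -> str:
    # Quantize so that small sensor noise maps to the same bucket...
    quantized = bytes(int(f * 10) & 0xFF for f in features)
    # ...then apply a keyed one-way function (HMAC-SHA256).
    return hmac.new(user_key, quantized, hashlib.sha256).hexdigest()

def verify(features: list[float], user_key: bytes, stored: str) -> bool:
    return hmac.compare_digest(protect(features, user_key), stored)
```

Note that a compromised template reveals neither the features nor the key, and issuing the user a new key yields an entirely new template from the same biometric — the property that makes such templates "cancelable."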

AI and biometric technologies are constantly evolving, so it’s necessary to keep your systems updated with the latest patches and software updates. 

How has the concept of “identity” evolved in today’s IT environment with the influence of AI, and what aspects of identity management have remained unchanged?

Traditionally, identity in the workplace was very much tied to physical locations and specific devices. You had workstations, and identity was about logging into a central network from these fixed points. It was a simpler time when the perimeter of security was the office itself. You knew exactly where data lived, who had access, and how that access was granted and monitored.

Now it’s a whole different ballgame. This is actually at the core of what JumpCloud does. Our open directory platform was created to securely connect users to whatever resources they need, no matter where they are. In 2024, identity is significantly more fluid and device-centered. Post-pandemic, and with the rise of mobile technology, cloud computing, and now the integration of AI, identities are no longer tethered to a single location or device. SMEs need employees to be able to access corporate resources from anywhere, at any time, using a combination of different devices and operating systems—Windows, macOS, Linux, iOS, Android. This shift necessitates a move from a traditional, perimeter-based security model to what’s often referred to as a zero-trust model, where every access transaction needs to have its own perimeter drawn around it.
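The zero-trust principle described above — every access transaction gets its own perimeter — means no request is trusted by default: identity, device posture, and entitlement are all re-checked at request time. A minimal sketch of that per-request evaluation, with illustrative (not product-specific) policy fields:

```python
# Zero-trust sketch: every request is evaluated on its own merits;
# nothing is granted just because the caller is "inside the network."
# The policy fields are illustrative.

def evaluate_request(req: dict) -> bool:
    """Grant access only when every check passes for *this* request."""
    checks = [
        req.get("token_valid", False),       # who: authenticated identity
        req.get("device_compliant", False),  # what: managed, patched device
        req.get("resource") in req.get("entitlements", ()),  # allowed to?
    ]
    return all(checks)
```

Because the checks run per request rather than per session or per network boundary, a device that falls out of compliance mid-session loses access on its very next request.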

In this new landscape, AI can vastly improve identity management in terms of data capture and analysis for contextual approaches to identity verification. As I mentioned, AI can consider the time of access, the location, the device, and even the behavior of the user to make real-time decisions about the legitimacy of an access request. This level of granularity and adaptiveness in managing access wasn’t possible in the past.

However, some parts of identity management have stayed the same. The core principles of authentication, authorization, and accountability still apply. We’re still asking the fundamental questions: “Are you who you say you are?” (authentication), “What are you allowed to do?” (authorization), and “Can we account for your actions?” (accountability). What has changed is how we answer these questions. We’re in the process of moving from static passwords and fixed access controls to more dynamic, context-aware systems enabled by AI.

In terms of identity processes and applications, what is the current role of AI for organizations, and how do you anticipate this evolving over the next 12 months?

We’re still a long way from the Skynet-type AI future that we’ve all associated with AI since The Terminator. For SMEs, AI accelerates a shift away from traditional IT management to an approach that’s more predictive and data-centric. At the core of this shift is AI’s ability to sift through vast, disparate data sets, identifying patterns, predicting trends, and, from an identity management standpoint, its power is in preempting security breaches and fraudulent activities. It’s tricky though, because you have to balance promise and risk, like legitimate concerns about data governance and the protection of personally identifiable information (PII). Tapping AI’s capabilities needs to ensure that we’re not overstepping ethical boundaries or compromising on data privacy. Go slow, and be intentional.

Robust data management frameworks that comply with evolving regulatory standards can protect the integrity and privacy of sensitive information. But keep in mind that no matter the benefit of AI automating processes, there’s a critical need for human oversight. The reality is that AI, at least in its current form, is best utilized to augment human decision-making, not replace it. As AI systems grow more sophisticated, organizations will require workers with specialized skills and competencies in areas like machine learning, data science, and AI ethics.

Over the next 12 months, I anticipate we’ll see organizations doubling down on these efforts to balance automation with ethical consideration and human judgment. SMEs will likely focus on designing and implementing workflows that blend AI-driven efficiencies with human insight but they’ll have to be realistic based on available budget, hours, and talent. And I think we’ll see an increase in the push towards upskilling existing personnel and recruiting specialized talent. 

For IT teams, I think AI will get them closer to eliminating tool sprawl and help centralize identity management, which is something we consistently hear that they want. 

When developing AI initiatives, what critical ethical considerations should organizations be aware of, and how do you envision governing these considerations in the near future?

As AI systems process vast amounts of data, organizations must ensure these operations align with stringent privacy standards and don’t compromise data integrity. Organizations should foster a culture of AI literacy to help teams set realistic and measurable goals, and ensure everyone in the organization understands both the potential and the limitations of AI technologies.

Organizations will need to develop more integrated and comprehensive governance policies around AI ethics that address:

How will AI impact our data governance and privacy policies? 

What are the societal impacts of our AI deployments? 

What components should an effective AI policy include, and who should be responsible for managing oversight to ensure ethical and secure AI practices?

Though AI is evolving rapidly, there are solid efforts from regulatory bodies to establish frameworks, working toward regulations for the entire industry. The White House’s National AI Research and Development Strategic Plan is one such example, and businesses can glean quite a bit from that. Internally, I’d say it’s a shared responsibility. CIOs and CTOs can manage the organization’s policy and ethical standards, Data Protection Officers (DPOs) can oversee compliance with privacy laws, and ethics committees or councils can offer multidisciplinary oversight. I think we’ll also see a move toward involving more external auditors who bring transparency and objectivity.

In the scenario of data collection and processing, how should companies approach these aspects in the context of AI, and what safeguards do you recommend to ensure privacy and security?

The Open Worldwide Application Security Project (OWASP) has a pretty exhaustive list and guidelines. For a guiding principle, I’d say be smart and be cautious. Only gather data you really need, tell people what you’re collecting, why you’re collecting it, and make sure they’re okay with it. 

Keeping data safe is non-negotiable. Security audits are important to catch any issues early. If something does go wrong, have a plan ready to fix things fast. It’s about being prepared, transparent, and responsible. By sticking to these principles, companies can navigate the complex world of AI with confidence.

Joel Rennich

VP of Product Strategy at JumpCloud 

Joel Rennich is the VP of Product Strategy at JumpCloud, residing in the greater Minneapolis, MN, area. He focuses primarily on the intersection of identity, users, and the devices that they use. While Joel has spent most of his professional career focused on Apple products, at JumpCloud he leads a team focused on device identity across all vendors. Prior to JumpCloud, Joel was a director at Jamf, helping to build Jamf Connect and other authentication products. In 2018 Jamf acquired Joel’s startup, Orchard & Grove, which is where Joel developed the widely used open-source software NoMAD. Installed on over one million Macs across the globe, NoMAD allows macOS users to get all the benefits of Active Directory without having to be bound to it. Joel also developed other open-source software at Orchard & Grove, such as DEPNotify and NoMAD Login. Over the years Joel has been a frequent speaker at a number of conferences, including WWDC, MacSysAdmin, MacADUK, Penn State MacAdmins Conference, Objective by the Sea, and FIDO Authenticate, in addition to user groups everywhere. Joel spent over a decade working at Apple in Enterprise Sales and started the website afp548.com, which was the mainstay of Apple system administrator education during the early years of Mac OS X.

The Importance of AI Ethics
https://ai-techpark.com/the-importance-of-ai-ethics/ (AI-Tech Park, Thu, 27 Apr 2023)

Explore the role of AI Ethics in developing trustworthy AI. Learn about the risks of unethical AI and the importance of transparency, fairness, and accountability.

Artificial intelligence (AI) is a rapidly advancing technology that is being integrated into various industries, including healthcare, finance, education, and transportation, among others. AI has the potential to revolutionize these fields by automating tasks, providing valuable insights, and making informed decisions. However, the widespread adoption of AI has also raised concerns about the ethical implications of its use.

AI systems can make decisions that affect individuals, groups, and society as a whole. These decisions may involve sensitive information, such as personal data, medical records, or financial information. Therefore, it is crucial to ensure that AI systems are developed and used in an ethical and responsible manner. This is where AI Ethics comes into play.

AI Ethics is a set of principles, guidelines, and values that aim to ensure the responsible and ethical development and use of AI. It covers various aspects, including transparency, fairness, accountability, privacy, and security, among others. The goal of AI Ethics is to create trustworthy AI systems that are safe, reliable, and beneficial to humanity.

In this article, we will explore why AI Ethics is critical for developing trustworthy AI. We will discuss the potential risks and consequences of unethical AI, such as bias, discrimination, and harm to individuals and society. We will also delve into the role of AI Ethics in ensuring fairness, transparency, and accountability in AI systems. Finally, we will look at various frameworks and guidelines available to guide ethical AI development.

The Risks and Consequences of Unethical AI

The development and use of AI systems without ethical considerations can lead to several potential risks and consequences. One of the significant risks is bias and discrimination. AI systems learn from data, and if the data is biased, the AI system can perpetuate that bias. For example, facial recognition systems have been found to have higher error rates for women and people with darker skin tones. Similarly, hiring algorithms have been shown to discriminate against certain groups based on their gender, ethnicity, or age.
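The facial-recognition example above — higher error rates for some groups than others — can be made concrete with one of the simplest fairness checks: compute the error rate per group and the gap between groups. The audit data below is invented for illustration, and error-rate parity is only one of several (sometimes conflicting) fairness criteria.

```python
# Simple fairness audit sketch: per-group error rates and the gap
# between the best- and worst-served groups. The records are toy data.

def error_rates(records):
    """records: (group, predicted, actual) triples -> group error rates."""
    totals, errors = {}, {}
    for group, predicted, actual in records:
        totals[group] = totals.get(group, 0) + 1
        if predicted != actual:
            errors[group] = errors.get(group, 0) + 1
    return {g: errors.get(g, 0) / totals[g] for g in totals}

def error_rate_gap(records) -> float:
    rates = error_rates(records)
    return max(rates.values()) - min(rates.values())

# Toy audit set: group A is misclassified far more often than group B.
audit = [
    ("A", 1, 0), ("A", 0, 0), ("A", 1, 0), ("A", 0, 0),
    ("B", 0, 0), ("B", 1, 1), ("B", 0, 0), ("B", 1, 1),
]
```

A large gap is a signal to inspect the training data and model before deployment — exactly the kind of disparity the facial-recognition studies mentioned above surfaced.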

Another risk is the potential harm caused by AI systems. For example, autonomous vehicles have the potential to save lives by reducing accidents caused by human error. However, if these vehicles are not designed and tested properly, they can cause accidents and harm to individuals and society. Similarly, AI systems used in healthcare must be accurate and reliable to prevent misdiagnosis and incorrect treatment.

Finally, the lack of transparency and accountability in AI systems can lead to mistrust and decreased adoption. Individuals and society as a whole need to understand how AI systems make decisions and why they make them. If AI systems are opaque and difficult to understand, individuals may be hesitant to use them or rely on them.

The Role of AI Ethics in Ensuring Fairness, Transparency, and Accountability

AI Ethics can play a significant role in mitigating these risks and ensuring that AI systems are developed and used in a responsible and ethical manner. One of the primary goals of AI Ethics is to ensure fairness and prevent bias and discrimination. AI systems must be designed to be inclusive and equitable and should not perpetuate biases present in the data.

Transparency is another critical aspect of AI Ethics. AI systems should be explainable, and individuals should understand how the system makes decisions. This transparency allows individuals to hold the developers and users of AI systems accountable for their decisions and actions.

Accountability is also a key aspect of AI Ethics. Developers and users of AI systems must take responsibility for the decisions made by the system. This accountability ensures that individuals and society are protected from potential harm caused by AI systems.

Frameworks and Guidelines for Ethical AI Development

Various frameworks and guidelines have been developed to guide ethical AI development. For example, the IEEE Global Initiative for Ethical Considerations in AI and Autonomous Systems has developed a set of ethical principles for AI. These principles include transparency, accountability, and ensuring that AI systems are designed to be inclusive and equitable.

Similarly, the European Union’s General Data Protection Regulation (GDPR) includes provisions for the ethical use of AI. The GDPR requires that individuals be informed of the processing of their personal data and that decisions made by AI systems that affect individuals be explainable.

Wrapping Up

In conclusion, the development and use of AI systems must be guided by ethical considerations. AI Ethics is essential for creating trustworthy AI systems that are safe, reliable, and beneficial to humanity. By mitigating the potential risks and consequences of unethical AI, AI Ethics can help ensure that AI systems are developed and used in a responsible and ethical manner. The frameworks and guidelines available can guide ethical AI development and ensure that individuals and society are protected from potential harm caused by AI systems.


AITech Interview with Dr. Sanjay Rajagopalan, Chief Design & Strategy Officer at Vianai Systems
https://ai-techpark.com/aitech-interview-with-dr-sanjay-rajagopalan/ (AI-Tech Park, Wed, 08 Mar 2023)

Prioritizing curiosity, customer success, and continuous learning to human-centered design and advanced AI drives transformations.

Curiosity drives one to ask questions and, ultimately, to solve big problems. These are the fundamental values on which Vianai Systems is built. The team believes in its customers’ success, stays focused, and is in continuous pursuit of learning. It brings together human-centered design and advanced AI techniques to help its customers drive breakthrough transformations. To learn how the organization is accelerating AI adoption, we interviewed its Chief Design & Strategy Officer, Dr. Sanjay Rajagopalan.

Below are the interview highlights: 

1. Kindly brief us about yourself and your journey as the Chief Design & Strategy Officer at Vianai Systems.

I have extensive experience and background in human-centered design as it relates to the enterprise – in products, processes, organizations, and services. As Chief Design & Strategy Officer at Vianai Systems, I head up our corporate design team and drive both customer and special projects. Before Vianai, I was the SVP and Head of Design & Research at Infosys, where I worked on strategic design and innovation initiatives internally and with dozens of customers, and I helped with training over 150K Infosys employees on concepts of design thinking, human-centered design, and innovation. Previously as the SVP of Design and Special Projects at SAP, I led 30+ corporate innovation initiatives. I hold a Ph.D. in manufacturing and design from Stanford University.

2. Tell us your source of inspiration for venturing into the field of human-centered design.

My technical field, prior to arriving at the Stanford ME Design program, was in the area of Computer Aided Design. My job and training involved the use of advanced CAD tools to model, simulate, analyze, and design automotive components like transmissions for heavy earth-moving equipment. At Stanford, I was exposed to a whole new way of thinking about design in the context of human-machine interactions. I was there in the early days when concepts on design methodology, HCI, design thinking and new paradigms for manufacturing and collaboration were forming. I was heavily influenced by the ongoing work at the Product Design program and the Center for Design Research at Stanford, which was the crucible in which the Stanford d.school was forged. I brought those ideas into the companies that I worked at subsequently which, being situated in the Bay Area, ended up being software and technology companies, specifically enterprise software companies. I was an early member of the newly formed Design Services Team at SAP – and worked there with Dr. Vishal Sikka and Dr. Hasso Plattner, who were instrumental in the formation of the Stanford d.school (also called The Hasso Plattner Institute of Design at Stanford). Many of the top design leaders in the industry, including at places like Facebook/Meta, Google, Twitter, ServiceNow, JP Morgan, Infosys, Cisco, and many other leading companies around the world worked together during those days at the DST at SAP.

3. Brief us about Vianai Systems and give us an overview of how it is humanizing AI for enterprises. 

Vianai Systems is a Palo Alto, California-based Human-Centered AI platform and products company launched in 2019 to address the unfulfilled promises of enterprise AI. 

Human-centered AI refers to scenarios where humans work closely with AI systems, augmenting and amplifying their capabilities. It serves as the main focus in the design of our products at Vianai. We work to bring the power of human understanding – like judgment and collaboration – together with the best data and AI techniques, to create intelligent systems that can greatly improve business outcomes and processes. We make monitoring and continuous operations tools, which help enterprises running a large number of models with high inference throughput track the performance of their models, and also diagnose and fix problems quickly. Our teams at Vianai have developed, and are continuing to develop, advanced tools and techniques to help enterprises implement safeguards that promote the responsible use of AI/ML – including in the incorporation of the very exciting advances in large language models, such as ChatGPT and Bard, into the mainstream of their businesses, but in a reliable, responsible and trustworthy manner. 

4. What are the core values on which the organization is built and what is the mission of the organization?

Vianai’s mission is to bring human-centered AI to life in enterprises worldwide, and help companies realize the full potential of their AI investments. While AI has made some notable recent advancements, the significant potential of bringing the power of human understanding, judgment, and collaboration together with data and the best AI techniques remains untapped. We believe in a future where the most valuable use of AI technology will be as a partner or co-pilot that enhances, improves, augments, and amplifies the capability of humans. Vianai was founded to bring this unfulfilled potential to the enterprise at scale. We refer to this as a “world full of life and intelligence,” meaning that all life on the planet will benefit from human and artificial intelligence working in partnership.

5. Being a thought leader, how do you strategize to bring to light Vianai Systems’ mission and vision?

At any startup, and especially in an area where the technology landscape is evolving very rapidly, driving company and product strategy is a real-time, dynamic, active and relentless challenge. The key to doing it effectively is something that our CEO Dr. Vishal Sikka has called “zero distance.”  Zero distance refers to the key decision makers at the company being hands-on with three critical aspects of the company – the end-user/customer (desirability), the product/technology (feasibility), and the sales/business (viability). 

All of the senior leaders at the company espouse the zero distance philosophy, work tirelessly and collaborate continuously in order to drive company and product strategy in a way that leverages the opportunity presented by human-centered AI for the enterprise. Our goal is to activate the collective skills and experiences within ourselves and all of our exceptionally talented colleagues to build a transformational company with remarkable products that will ultimately win in the marketplace.

6. What do you think is the most exciting part about working in Human-Centered AI?

We are fortunate to be doing pioneering work in an area that has, in recent times, captured the attention and minds of a very large number of people – especially those in technology, business, government and academia. It isn’t often that the topic of AI, in some shape or form, is not on the front pages of major news publications, and the topmost trending topics on social media and other platforms. The buzz and the hype is everywhere – which is both a benefit and a challenge. 

The benefit is that we all come to work every day motivated and excited about what we are working on. We are certainly in the epicenter of a topical, relevant, disruptive and game-changing moment in our history, as it relates to the impact of AI and related technologies on everything around us. We could have the tools at hand to help us solve some of the most vexing problems at a global scale – those related to climate change, energy, war, poverty, injustice, and alleviation of human suffering. We also believe that we are differentiated from our competitors in the understanding and embrace of human-centered AI, as we believe this will be the dominant type of AI in the future. The challenge is that all the hype and attention also results in excessive noise and an overload of misinformation, misunderstanding, and wasted energy. We need to find ways to not get distracted by this noise, and focus on moving forward on our mission without compromising on our core values. 

Working in this dynamic environment and balancing these challenges and opportunities is the essential thrill and excitement of our work.

7. Why do you think it is important to craft a smarter, more humanized AI ecosystem?

Large AI models, especially those which deal with language and imagery, are exceptionally good at mimicking humans. However, unlike humans, these systems have no real-world understanding, and no model of the physical world they can reference to cross-check and validate their outputs. This makes them (at least the current versions of this technology) quite worthless for tasks that need accuracy, reliability, repeatability or precision. Nevertheless, humans tend to anthropomorphize anything that does a reasonably good job of mimicking us – even in a limited manner. This can be a very dangerous thing, as it may lull us into a false belief that large AI systems are actually “intelligent” in the real sense of the word. Some people even go as far as attributing sentience to a purely mechanical (but high-quality) mimicking of a limited set of human behavior while, in fact, machines are nowhere close to having a human-level understanding of the real world. Most likely, this gap will not be closed anytime soon.

To avoid disappointment, and perhaps disaster, while using such systems, people need to be made fully aware of their strengths and limitations. They need to understand deeply how these systems actually work – at what types of tasks they excel, and at what types of tasks their capability is limited and their performance superficial and illusory. In a smarter and more human-centric AI ecosystem, the goal would not be to impress via mimicry, but to add value and improve outcomes that matter to people. Failing to achieve this – through proper design of interfaces, life-cycle management, and governance systems for AI models – could carry quite substantial costs.

8. In your opinion, how does trust play a crucial role in utilizing AI systems, and how does human-centered design bridge this gap?

The guarantee of performance required to entrust a machine with a task is clearly very contextual. For example, if a mistake can make the difference between life and death (like in an autonomously driven vehicle), the level of trust needed between the machine driver and the human passenger would need to be extremely high. On the other hand, if the machine is composing a poem or writing the outline of a news article, then some poor performance can easily be tolerated, and the trust requirement is relaxed a bit. In general, tasks performed in typical enterprises by professionals using AI/ML apps and tools (e.g. decision support for financial, procurement, supply chain, or operational tasks) require a higher level of accuracy and trust than consumer use-cases.

A key thing about trust is that it, like a good reputation, is hard to win and is easily lost. Machines also start from a position of deficit in trust from humans – largely because humans hold machines to a higher standard than they do other humans. For example, humans may tolerate a minor accident from a human driver while they are in a vehicle with them, and get right back into the car to complete the ride. However, they may insist that the car be returned to the service center, fully fixed and reprogrammed before they would reluctantly get back in a car that was crashed by a machine.

Human-centered AI platforms which monitor, govern, diagnose, and retrain errant or deteriorating AI systems ensure that problems (like performance drift) are diagnosed early and fixed before the hard-earned trust in the machine is lost. At Vianai, we have built a platform for monitoring and continuous operations of AI/ML workloads at scale. This system allows for monitoring of various types of drift (input drift, feature drift, output drift, outcome drift, etc.), root-cause analysis to determine the underlying reasons for any detected drift, automated model retraining, and validation prior to the re-deployment of the improved model.
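The interview does not detail how Vianai's platform implements these drift checks, but the idea of input drift detection can be sketched with a two-sample Kolmogorov–Smirnov statistic comparing live inputs against the reference (training) distribution. Everything below – the function names, the 0.2 alert threshold, the Gaussian toy data – is an illustrative assumption, not Vianai's actual implementation:

```python
import random

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum gap
    between the empirical CDFs of samples a and b."""
    a, b = sorted(a), sorted(b)
    na, nb = len(a), len(b)
    ia = ib = 0
    d = 0.0
    for v in sorted(set(a + b)):
        # Advance each counter past all values <= v, then compare the
        # two empirical CDFs at v.
        while ia < na and a[ia] <= v:
            ia += 1
        while ib < nb and b[ib] <= v:
            ib += 1
        d = max(d, abs(ia / na - ib / nb))
    return d

def input_drift_alert(reference, live, threshold=0.2):
    """Flag drift when live inputs depart too far from the training data."""
    return ks_statistic(reference, live) > threshold

random.seed(0)
reference = [random.gauss(0.0, 1.0) for _ in range(1000)]  # training-time inputs
stable    = [random.gauss(0.0, 1.0) for _ in range(500)]   # same distribution
shifted   = [random.gauss(1.5, 1.0) for _ in range(500)]   # the mean has drifted

print(input_drift_alert(reference, stable))   # expect False
print(input_drift_alert(reference, shifted))  # expect True
```

In a production system, an alert like this would feed the root-cause analysis and retraining loop described above; similar comparisons on model inputs, intermediate features, and predictions correspond to the input, feature, and output drift the platform monitors.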

9. Having extensive experience in the field, please brief us on the emerging trends of the new generation and how you plan to fulfill the dynamic needs of this ever-evolving space.

The next generation of engines that will power applications like ChatGPT and Midjourney is widely anticipated to be orders of magnitude larger and more complex than the current models. The new models will be trained (at great expense) on a vastly larger dataset and will run on a much larger compute footprint. The achievements of these more advanced systems will continue to be impressive, but some fundamental issues are unlikely to be addressed without an overhaul of how such systems are built. Meanwhile, the permeation of these technologies into the everyday lives of ordinary people, and their adoption into enterprise activities and business processes, will also continue to grow – and human-centricity in the design of the tools will become increasingly important. There is also likely to be foundational progress in the science of AI that can address some of the structural flaws of such systems – both within academia and in corporate research labs.

We plan to fulfill the dynamic needs of this evolving space in two ways – first, by architecting our platform and tools to operate at immense speed and scale, well beyond the median requirement for the enterprise today. Second, we will bring to market capabilities that massively improve the cost and performance of both building and running such systems. As these technologies continue to grow in scale and scope, we hope to stay ahead by investing up front in the capabilities that will best future-proof our offerings and keep them relevant in the long run.

10. How do you envision scaling both Vianai Systems and your own growth curve in 2023?

We expect 2023 to be a critical year for the company – both in terms of bringing to market some of the most exciting application and platform features available in the enterprise AI space, and in growing our base of productive and referenceable customers. We believe we are in a great position to take advantage of some of the momentum generated in the space by high-profile players like Microsoft, OpenAI, Google, etc., while also differentiating our offering in a way that makes sense and resonates with our enterprise customers. We expect to see more adoption of the applications we have already released into the market (Dealtale.com, hila.ai, etc.), introduce new high-value applications still under development, and also see significant uptake of our platform for monitoring, continuous operations, and performance optimization.

At a personal level, I fully expect to learn something new every single day and be impressed by what our team of exceptionally talented colleagues will achieve during the year. These are truly unique and disruptive times (in a good way) – and it is a privilege to have a front-seat view of, and occasionally even a hand in driving, the big and meaningful advancements happening in this field.

Dr. Sanjay Rajagopalan 

Chief Design & Strategy Officer at Vianai Systems

Dr. Rajagopalan has extensive experience and background in Human-centered Design as it relates to the Enterprise – in products, processes, organizations, and services. He is currently Chief Design & Strategy Officer at Vianai Systems, a Palo Alto, Calif., based company with a mission to bring human-centered AI to life in enterprises worldwide, and help companies realize the full potential of their AI investments. Previously, Sanjay was the SVP and Head of Design & Research at Infosys and the SVP of Design and Special Projects at SAP. He holds a Ph.D. in manufacturing and design from Stanford University.

The post AITech Interview with Dr. Sanjay Rajagopalan, Chief Design & Strategy Officer at Vianai Systems first appeared on AI-Tech Park.
