AI-Tech Interview with Geoffrey Peterson, Vice President of Data & Analytics at Alight

Discover Geoffrey Peterson’s take on AI’s transformative role in employee experience and the future of data-driven decision-making.

Geoffrey, can you provide a brief overview of your role as the Vice President of Data Analytics at Alight and your expertise in AI and data analytics within the HR domain?

I look after Alight’s AI, personalization, and analytic capabilities from a product and data science perspective. This includes Alight’s chatbots, search engines, personalized nudges, and recommendation capabilities we provide to our clients and their employees, as well as some of the AI-enabled automations we’re putting in place to deliver high-quality ongoing service.

We’re continuously enhancing our capabilities. For example, we recently unveiled Alight LumenAI, our next-generation AI engine powering Alight Worklife®, Alight’s employee experience platform.

We observed three consistent imperatives for creating leap-frog HR AI capabilities: 

1. Adoption of Generative AI (Gen-AI)

2. Tying together AI capabilities with one unified view of an employee

3. Use of ever-growing internal and external datasets to improve model performance

That’s why we launched Alight LumenAI – to ensure we could continue bringing market-leading AI capabilities to our HR clients.

I’ve been passionate about AI and data-enabled SaaS products for a long time, regardless of sector, and that’s reflected in my prior roles building AI-powered experiences in cybersecurity at SecurityScorecard, finance at Bloomberg, and consumer goods at Arena AI.

I joined Alight because applying AI and data science to the Human Resources (HR) space is a chance to deploy AI “for good” – ensuring people are enrolling in the right benefits, preparing appropriately for retirement, having a seamless employee experience, and generally maximizing the wellbeing opportunities offered by their employers.

Right now is an especially exciting time to be at Alight: our clients are being pushed by their CEOs to demonstrate transformational AI strategies within HR, and the AI capabilities Alight offers fit these strategies very well and can deliver transformational wins.

AI and employee experience:

How do you envision generative AI and AI-powered platforms shaping the future of the employee experience in the workplace?

There’s a baseline level of transformation happening everywhere, where most of the tools we use to do our work are getting generative AI upgrades. 

Taking a step back, we broadly see AI fitting into 5 categories within the HR domain – and these mirror the capabilities that we bring to market for our clients:

  • AI Personalization: capabilities that drive a greater than 10% increase in targeted client HR outcomes through personalized, “next best action” content
  • AI Assistance: capabilities with natural language/intent models to maximize digital engagement, supporting a 90% self-service rate
  • AI Recommendation: capabilities providing automated decision support and choice optimization for benefits and care, saving employees on average $500 in premium expenses annually
  • AI Insights: capabilities with data trend analysis for high-precision employer analytics to identify hotspots in the employee experience and HR processes
  • AI Automation: capabilities that streamline repetitive workflows, such as document processing or at-scale call monitoring

Whereas in the past, an HR team might adopt a few of the above capabilities, we’re seeing that teams succeeding with AI transformation are adopting tech across all 5 of those categories.

Can you give examples of how AI-driven innovations have already improved the employee experience in organisations?

We’ve been working with clients to deploy AI for years, even before the surge in generative AI interest over the last year. A great example is our Interactive Virtual Assistant (IVA), or chatbot, which helps employees answer their benefits questions in a personalized, self-service way and drives a 90%+ digital interaction rate (so folks don’t need to get in touch with a call center).

When we launched IVA about five years ago, its initial performance was “ok” – but in the intervening years we’ve spent millions of dollars on teams tuning the algorithm based on performance across 30M employee interactions, so that our IVA now offers market-leading performance.

It’s an important lesson to remember: AI systems often require ongoing maintenance and investment by professionals to achieve differentiated performance.  Having a “human in the loop” is incredibly important.

Our AI-powered IVA continues to see increased engagement from employees and was recently enhanced to also execute transactions – for example, by allowing employees to re-enroll in their health coverage plans during annual enrollment.  

We’re also excited to be piloting a GenAI-enhanced version of our IVA, powered by Alight LumenAI, that provides more holistic and helpful answers to questions whose information was previously locked up in complicated policy or benefits documents. The results have been pretty spectacular – one of our clients, on using it for the first time, said, “this is amazing, can we just roll this out now!”

Efficiency and productivity:

In what specific ways can AI enhance efficiency and productivity for both employees and employers in today’s evolving work environment?

In the HR vertical, efficiency is often about trying to reduce call, ticket and email volume for HR teams so that work shifts away from repetitive administrative employee needs and towards more consultative high-value activities.  

Anything AI can do to reduce the volume of administrative calls and tickets is immensely helpful. AI can help HR teams diagnose, at scale, what is driving high call and ticket volumes, shortening what are often very long process-improvement cycles, and it can also help create more effective interception points that let individuals self-serve.

For example, imagine an employee with an HR need logging into their internal HR portal, then using an IVA chatbot to try to answer their question, and then using a voice-based Interactive Voice Response (IVR) call-routing system when they phone the call center. That’s three interception points where AI has an opportunity to help an employee self-serve for a better, faster experience before they reach an agent.

Intelligent Document Processing is another great example of how we partner with clients to improve experience and reduce cost. Many HR processes still depend on employees submitting documents (deposit checks, birth certificates, etc.), so when we deploy intelligent document processing, we cut the time it takes to process documents and give users feedback from 10+ minutes to near-instant. Not only is this fast feedback loop a better experience for employees, it also tends to reduce calls from employees asking about document status.

Personalization:

How can AI enable more personalised experiences for employees, and why is personalization important for overall employee satisfaction and engagement?

Personalization is a broad term and can encompass many things. It can be as basic as knowing which benefits someone is eligible for and only showing them information about those, and it can scale all the way to using AI to nudge or prompt employees according to a next-best-action framework.

Without a baseline of personalization in place, employees can quickly become disengaged by an experience that feels irrelevant to them. Once that baseline is there, you can start to experiment with personalization that drives outcomes. We often partner with clients on personalized communication campaigns that drive outcomes such as increased 401(k) contributions, increased HSA contributions, or greater utilization of specific programs like healthcare navigation.
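To make the idea concrete, here is a minimal, illustrative sketch of a next-best-action selection step in Python – eligibility filtering as the baseline, then expected-value ranking on top. This is not Alight’s implementation; the names, scores, and values are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Nudge:
    action: str          # e.g. "increase_401k_contribution"
    eligible: bool       # baseline personalization: is the employee eligible?
    propensity: float    # model-estimated probability the employee acts
    value: float         # estimated benefit to the employee if they act

def next_best_action(candidates: list[Nudge]) -> Nudge | None:
    """Pick the highest expected-value nudge the employee is eligible for."""
    eligible = [n for n in candidates if n.eligible]
    if not eligible:
        return None
    return max(eligible, key=lambda n: n.propensity * n.value)

nudges = [
    Nudge("increase_401k_contribution", eligible=True, propensity=0.12, value=900.0),
    Nudge("open_hsa", eligible=False, propensity=0.30, value=500.0),
    Nudge("use_healthcare_navigation", eligible=True, propensity=0.25, value=300.0),
]
print(next_best_action(nudges).action)  # -> increase_401k_contribution
```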

For example, in March 2024, a pharmaceutical client selected Alight to help improve the financial wellbeing of its workforce through personalized messaging that encouraged changes in saving behaviors. With only 75% of employees participating in a Health Savings Account (HSA) and a majority saving below the maximum allowed amount, the company aimed to encourage greater participation in retirement and health savings plans and to ensure that employees were taking advantage of the company match to both the 401(k) and HSA.

With a focus on employees who had not yet maximized the value of tax-advantaged accounts, the company partnered with Alight to leverage personalized email and web messaging that would influence saving behaviors. This personalized messaging was made possible with adaptive, “Always On” AI technology that dynamically adjusted engagement strategies to drive up retirement and health savings contributions over time. Upon partnering with the client, Alight took strategic steps to ensure seamless integration and successful implementation of the AI-driven program. 

Key initiatives included:

  • Assessment: The Alight team conducted a comprehensive needs assessment to understand the specific challenges and goals of the client in-depth.
  • Data analysis: Extensive analysis of existing data, including employee participation rates, savings patterns, and financial behaviors, was undertaken to inform the AI-driven personalization strategy.
  • Integration planning: Alight collaborated closely with the client to develop an integration plan, identifying areas for personalized content implementation within existing communication channels.
  • Customization framework: A tailored framework for content personalization was established that considered the unique characteristics of the client’s workforce and desired outcomes.
  • Pilot programs: Small-scale pilot programs were initiated to test the effectiveness of the AI-driven approach, allowing for adjustments and refinements before full-scale implementation.
  • Continuous monitoring: The Alight team implemented continuous monitoring and feedback mechanisms to track the success of the AI-driven program and ensure ongoing adaptability.

Post-implementation, Alight conducted thorough assessments of the system’s impact on both 401(k) and HSA participation, and success was substantiated by the substantial increase in employee contributions to both. Additionally, tax savings projections were delivered to show the true value of these funds.  Planning, testing and effective execution of the new AI-driven messaging system took less than six months.

As a result, the pharmaceutical company realized a substantial 17% increase in employees starting or increasing their 401(k) savings, a commendable 6% increase in employees starting or increasing contributions to the HSA, a notable 5.4% improvement in the average 401(k) contribution rate, and a tangible financial impact averaging $1,750 more in employee HSA contributions.

Measuring value:

What strategies can companies employ to effectively measure the value derived from their investments in employee experience and well-being initiatives, using data-driven insights?

Most importantly, companies need to know the outcome they’re trying to achieve upfront, and they need to be measuring that on an ongoing basis.  Once that’s in place, there are varying levels of sophistication clients can deploy to measure and attribute changes in the planned outcome to the interactions they are executing.  

The gold standard here is treatment vs. control groups, but even basic attribution can give a useful measure of success. In many cases, if there is a specific action an employer is trying to drive, they can track who took that action after experiencing a personalized nudge and attribute those actions to the nudge (a minimal sketch of this attribution logic follows the examples below). Examples of impact we’ve seen using this basic measurement methodology include:

  • Nudges delivered over 6 months to direct employees to financial coaches resulted in a 7% increase in enrollment in high-deductible health plans (HDHPs)
  • Nudges delivered over 6 months encouraging employees to contribute more to their HSAs resulted in a 33% conversion rate from messaging to action, and the increase in HSA contributions yielded ~$1M in FICA tax savings for the employer
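As a rough illustration of that basic attribution logic – not a production measurement system – the sketch below counts employees who took a target action within a fixed window after receiving a nudge. The event log, window, and helper names are hypothetical.

```python
from datetime import datetime, timedelta

# Hypothetical event log: (employee_id, event, timestamp)
events = [
    ("e1", "nudge_sent",   datetime(2024, 3, 1)),
    ("e1", "hsa_increase", datetime(2024, 3, 9)),
    ("e2", "nudge_sent",   datetime(2024, 3, 1)),
    ("e3", "hsa_increase", datetime(2024, 3, 2)),  # never nudged: not attributed
]

WINDOW = timedelta(days=30)  # attribution window after the nudge

def attributed(events, action="hsa_increase"):
    """Count target actions taken within WINDOW of a nudge (basic attribution)."""
    nudged_at = {emp: ts for emp, ev, ts in events if ev == "nudge_sent"}
    hits = sum(
        1
        for emp, ev, ts in events
        if ev == action
        and emp in nudged_at
        and timedelta(0) <= ts - nudged_at[emp] <= WINDOW
    )
    return hits, hits / max(len(nudged_at), 1)  # (conversions, conversion rate)

print(attributed(events))  # -> (1, 0.5): e1 converted after a nudge, e2 did not
```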

Data utilisation:

Could you elaborate on how organisations can responsibly utilise employee data to enhance the employee experience while maintaining data privacy and security?

Sure – organizations need to think both about overall data security and about ensuring appropriate use of data for each experience use case. In general, the fewer places you send and store your employee data, the better, and the fewer opportunities there are for data breach or inappropriate use. When it comes to appropriate use, enhancing the employee experience with data should be governed by standard data risk management and security review processes.

Alight’s clients include government entities and defense contractors, so we’ve already been operating in a very robust data and cybersecurity framework.  Last year we formalized our approach to AI risk and now assess all use-cases of AI technology against an 8-part risk framework that looks at things like data risk, bad output risk, bias risk, etc.

Challenges of implementing AI:

What are the common challenges that organisations face when implementing AI-powered solutions for employee experience, and how can they mitigate these challenges?

We like to use an “AI Intrapreneur” framework that lays out five important considerations for any new AI use case – if you’re thoughtful about these five factors, you will successfully launch an AI use case:

  • Pick the right areas – Focus on problems AI can solve now, not speculative future capabilities. Validate with small, low-risk pilots.
  • Resource wisely – Build in-house for differentiated capabilities, use vendors for commoditized capabilities.
  • Avoid high-risk AI uses – AI will make mistakes: don’t use AI where those mistakes have severe consequences.
  • Keep humans in the loop – Humans must oversee AI systems. Design AI use cases for human oversight.
  • Measure extensively – Rigorously measure performance, error rates, biases and business impact. Establish feedback loops.

We took the above approach in our current Gen-AI IVA pilot – testing with a small number of users at a small set of clients, building some of the technology ourselves so that we could be differentiated in the market, and being very thoughtful about how we keep humans in the loop to ensure accurate answers to employees’ HR-related questions.

Ethical considerations:

Are there ethical considerations organisations should be aware of when integrating AI into employee experience initiatives, and how can they ensure ethical AI practices?

The most important ethical consideration – which we touched on above – is understanding the consequence of a bad model output for the person it affects.

Leadership and management changes:

With the integration of AI, how do you foresee the role of leadership and management evolving in HR and employee experience, and what challenges might this transformation present?

The biggest shift is likely to be that whereas managers previously managed the quality of their team’s output, they will now spend an increasing amount of time managing the quality of an algorithm’s output. No AI system is perfect, and they all require some amount of human oversight.

Final thoughts:

As AI technologies evolve rapidly, what advice would you offer HR and business leaders to stay informed and leverage the latest AI innovations effectively for employee experiences?

Read and absorb as much as possible and stay curious!  Don’t expect to stay fully up to date – even AI researchers are getting surprised these days by sudden developments in the field.

More generally, be aware of your organization’s overall risk appetite and be comfortable with it – some organizations want to be on the leading edge, others may want to take a more conservative approach – both are OK.

Geoffrey Peterson

Vice President of Data & Analytics at Alight

Geoffrey Peterson is the Vice President of Data and Analytics at Alight Solutions, a role he’s held since 2023. Before joining Alight, he was Global Head of Product Management and Data Governance at Bloomberg and a Senior Product Manager at SecurityScorecard. Earlier in his career, he was a Business Analyst and Associate at McKinsey & Company before moving into management roles at South African Breweries Limited. Peterson earned a BA in Computer Science and Economics from Harvard University and an ME in Computer Science from Cornell University.

AI-Tech Interview with Leslie Kanthan, Chief Executive Officer and Founder at TurinTech AI

Learn about code optimization and its significance in modern business.

Background:

Leslie, can you please introduce yourself and share your experience as a CEO and Founder at TurinTech?

As you say, I’m the CEO and co-founder at TurinTech AI. Before TurinTech came into being, I worked for a range of financial institutions, including Credit Suisse and Bank of America. I met the other co-founders of TurinTech while completing my Ph.D. in Computer Science at University College London. I have a special interest in graph theory, quantitative research, and efficient similarity search techniques.

While in our respective financial jobs, we became frustrated with the manual machine learning development and code optimization processes in place. There was a real gap in the market for something better. So, in 2018, we founded TurinTech to develop our very own AI code optimization platform.

When I became CEO, I had to carry out a lot of non-technical and non-research-based work alongside the scientific work I’m accustomed to. Much of the job comes down to managing people and expectations, meaning I have to take on a variety of different areas. For instance, as well as overseeing the research side of things, I also have to understand the different management roles, know the financials, and be across all of our clients and stakeholders.

One thing I have learned in particular as a CEO is to run the company as horizontally as possible. This means creating an environment where people feel comfortable coming to me with any concerns or recommendations they have. This is really valuable for helping to guide my decisions, as I can use all the intel I am receiving from the ground up.

To set the stage, could you provide a brief overview of what code optimization means in the context of AI and its significance in modern businesses?

Code optimization refers to the process of refining and improving the underlying source code to make AI and software systems run more efficiently and effectively. It’s a critical aspect of enhancing code performance for scalability, profitability, and sustainability.

The significance of code optimization in modern businesses cannot be overstated. As businesses increasingly rely on AI, and more recently, on compute-intensive Generative AI, for various applications — ranging from data analysis to customer service — the performance of these AI systems becomes paramount.

Code optimization directly contributes to this performance by speeding up execution time and minimizing compute costs, which are crucial for business competitiveness and innovation.

For example, recent TurinTech research found that code optimization can lead to substantial improvements in execution times for machine learning codebases — up to around 20% in some cases. This not only boosts the efficiency of AI operations but also brings considerable cost savings. In the research, optimized code in an Azure-based cloud environment resulted in about a 30% cost reduction per hour for the utilized virtual machine size.

Code optimization in AI is all about maximizing results while minimizing inefficiencies and operational costs. It’s a key factor in driving the success and sustainability of AI initiatives in the dynamic and competitive landscape of modern businesses.
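As a toy illustration of what code-level optimization means in practice – not output from TurinTech’s platform – the sketch below shows a naive pairwise-distance routine alongside an equivalent vectorized version that computes the same result with far less interpreter overhead.

```python
import numpy as np

def pairwise_dists_slow(X):
    """Naive O(n^2 * d) Python loops: correct but slow."""
    n = len(X)
    out = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            out[i, j] = np.sqrt(np.sum((X[i] - X[j]) ** 2))
    return out

def pairwise_dists_fast(X):
    """Same result, vectorized with broadcasting: far fewer interpreter steps."""
    diff = X[:, None, :] - X[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

X = np.random.rand(200, 16)
assert np.allclose(pairwise_dists_slow(X), pairwise_dists_fast(X))
```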

Code Optimization:

What are some common challenges and issues businesses face with code optimization when implementing AI solutions?

Businesses implementing AI solutions often encounter several challenges with code optimization, mainly due to the dynamic and complex nature of AI systems compared to traditional software optimization. Achieving optimal AI performance requires a delicate balance between code, model, and data, making the process intricate and multifaceted. This complexity is compounded by the need for continuous adaptation of AI systems, as they require constant updating to stay relevant and effective in changing environments.

A significant challenge is the scarcity of skilled performance engineers, who are both rare and expensive. In cities like London, costs can reach up to £500k per year, making expertise a luxury for many smaller companies.

Furthermore, the optimization process is time- and effort-intensive, particularly in large codebases. It involves an iterative cycle of fine-tuning and analysis, demanding considerable time even for experienced engineers. Large codebases amplify this challenge, requiring significant manpower and extended time frames for new teams to contribute effectively.

These challenges highlight the necessity for better tools to make code optimization more accessible and manageable for a wider range of businesses.

Could you share some examples of the tangible benefits businesses can achieve through effective code optimization in AI applications?

AI applications are subject to change along three axes: model, code, and data. At TurinTech, our evoML platform enables users to generate and optimize efficient ML code. Meanwhile, our GenAI-powered code optimization platform, Artemis AI, can optimize more generic application code. Together, these two products help businesses significantly enhance cost-efficiency in AI applications.

At the model level, different frameworks or libraries can be used to improve model efficiency without sacrificing accuracy. However, transitioning an ML model to a different format is complex and typically requires manual conversion by developers who are experts in these frameworks.

At TurinTech AI, we provide advanced functionalities for converting existing ML models into more efficient frameworks or libraries, resulting in substantial cost savings when deploying AI pipelines.

One of our competitive advantages is our ability to optimize both the model code and the application code. Inefficient code execution, which consumes excess memory, energy, and time, can be a hidden cost in deploying AI systems. Code optimization, often overlooked, is crucial for creating high-quality, efficient codebases. Our automated code optimization features can identify and optimize the most resource-intensive lines of code, thereby reducing the costs of executing AI applications.

Our research at TurinTech has shown that code optimization can improve the execution time of specific ML codebases by up to around 20%. When this optimized code was tested in an Azure-based cloud environment, we observed cost savings of about 30% per hour for the virtual machine size used. This highlights the significant impact of optimizing both the model and code levels in AI applications.
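To see how such savings compound, here is a back-of-the-envelope calculation; the workload figures are invented, and only the roughly 30% hourly cost reduction comes from the research quoted above.

```python
# Hypothetical workload figures for illustration; only the ~30% hourly
# cost reduction is taken from the research discussed above.
vm_cost_per_hour = 1.20        # USD for the chosen VM size (invented)
monthly_compute_hours = 4_000  # hours of ML workload per month (invented)

baseline_cost = vm_cost_per_hour * monthly_compute_hours  # 4800.0
optimized_cost = baseline_cost * (1 - 0.30)               # 3360.0
print(f"Monthly savings: ${baseline_cost - optimized_cost:,.2f}")  # $1,440.00
```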

Are there any best practices or strategies that you recommend for businesses to improve their code optimization processes in AI development?

Code optimization leads to more efficient, greener, and cost-effective AI. Without proper optimization, AI can become expensive and challenging to scale.

Before embarking on code optimization, it’s crucial to align the process with your business objectives. This alignment involves translating your main goals into tangible performance metrics, such as reduced inference time and lower carbon emissions.

Empowering AI developers with advanced tools can automate and streamline the code optimization process, transforming what can be a lengthy and complex task into a more manageable one. This enables developers to focus on more innovative tasks.

In AI development, staying updated with AI technologies and trends is crucial, particularly by adopting a modular tech stack. This approach not only ensures efficient code optimization but also prepares AI systems for future technological advancements.

Finally, adopting eco-friendly optimization practices is more than a cost-saving measure; it’s a commitment to sustainability. Efficient code not only reduces operational costs but also lessens the environmental impact. By focusing on greener AI, businesses can contribute to a more sustainable future while reaping the benefits of efficient code.

Generative AI and Its Impact:

Generative AI has been a hot topic in the industry. Could you explain what generative AI is and how it’s affecting businesses and technology development?

Generative AI, a branch of artificial intelligence, excels in creating new content, such as text, images, code, video, and music, by learning from existing datasets and recognizing patterns.

Its swift adoption is ushering in a transformative era for businesses and technology development. McKinsey’s research underscores the significant economic potential of Generative AI, estimating it could contribute up to $4.4 trillion annually to the global economy, primarily through productivity enhancements.

This impact is particularly pronounced in sectors like banking, technology, retail, and healthcare. The high-tech and banking sectors, in particular, stand to benefit significantly. Generative AI is poised to accelerate software development, revolutionizing these industries with increased efficiency and innovative capabilities. We have observed strong interest from these two sectors in leveraging our code optimization technology to develop high-performance applications, reduce costs, and cut carbon emissions.

Are there any notable applications of generative AI that you find particularly promising or revolutionary for businesses?

Generative AI presents significant opportunities for businesses across various domains, notably in marketing, sales, software engineering, and research and development. According to McKinsey, these areas account for approximately 75% of generative AI’s total annual value.

One of the standout areas of generative AI application is in data-driven decision-making, particularly through the use of Large Language Models (LLMs). LLMs excel in analyzing a wide array of data sources and streamlining regulatory tasks via advanced document analysis. Their ability to process and extract insights from unstructured text data is particularly valuable. In the financial sector, for instance, LLMs enable companies to tap into previously underutilized data sources like news reports, social media content, and publications, opening new avenues for data analysis and insight generation.

The impact of generative AI is also profoundly felt in software engineering, a critical field across all industries. The potential for productivity improvements here is especially notable in sectors like finance and high-tech. An interesting trend in 2023 is the growing adoption of AI coding tools by traditionally conservative buyers in software, such as major banks including Citibank, JPMorgan Chase, and Goldman Sachs. This shift indicates a broader acceptance and integration of AI tools in areas where they can bring about substantial efficiency and innovation.

How can businesses harness the potential of generative AI while addressing potential ethical concerns and biases?

The principles of ethical practice and safety should be at the heart of implementing and using generative AI. Our core ethos is the belief that AI must be secure, reliable, and efficient. This means ensuring that our products, including evoML and Artemis AI, which utilize generative AI, are carefully crafted, maintained, and tested to confirm that they perform as intended.

There is a pressing need for AI systems to be free of bias, including biases present in the real world. Therefore, businesses must ensure their generative AI algorithms are optimized not only for performance but also for fairness and impartiality. Code optimization plays a crucial role in identifying and mitigating biases that might be inherent in the training data and reduces the likelihood of these biases being perpetuated in the AI’s outputs.

More broadly, businesses should adopt AI governance processes that include the continuous assessment of development methods and data and provide rigorous bias mitigation frameworks. They should scrutinize development decisions and document them in detail to ensure rigor and clarity in the decision-making process. This approach enables accountability and answerability.

Finally, this approach should be complemented by transparency and explainability. At TurinTech, for example, we ensure our decisions are transparent company-wide and also provide our users with the source code of the models developed using our platform. This empowers users and everyone involved to confidently use generative AI tools.

The Need for Sustainable AI:

Sustainable AI is becoming increasingly important. What are the environmental and ethical implications of AI development, and why is sustainability crucial in this context?

More than 1.3 million UK businesses are expected to use AI by 2040, and AI itself has a high carbon footprint. A University of Massachusetts Amherst study estimates that training a single Natural Language Processing (NLP) model can generate close to 300,000 kg of carbon emissions.

According to an MIT Technology Review article, this amount is “nearly five times the lifetime emissions of the average American car (and that includes the manufacture of the car itself).” With more companies deploying AI at scale, and in the context of the ongoing energy crisis, the energy efficiency and environmental impact of AI are becoming more crucial than ever before.

Some companies are starting to optimize their existing AI and code repositories using AI-powered code optimization techniques to address energy use and carbon emission concerns before deploying a machine learning model. However, most regional government policies have yet to significantly address the profound environmental impact of AI. Governments around the world need to emphasize the need for sustainable AI practices before it causes further harm to our environment.

Can you share some insights into how businesses can achieve sustainable AI development without compromising on performance and innovation?

Sustainable AI development, where businesses maintain high performance and innovation while minimizing environmental impact, presents a multifaceted challenge. To achieve this balance, businesses can adopt several strategies.

Firstly, AI efficiency is key. By optimizing AI algorithms and code, businesses can reduce the computational power and energy required for AI operations. This not only cuts down on energy consumption and associated carbon emissions but also ensures that AI systems remain high-performing and cost-effective.

In terms of data management, employing strategies like data minimization and efficient data processing can help reduce the environmental impact. By using only the data necessary for specific AI tasks, companies can lower their storage and processing requirements.

Lastly, collaboration and knowledge sharing in the field of sustainable AI can spur innovation and performance. Businesses can find novel ways to develop AI sustainably without compromising on performance or innovation by working together, sharing best practices, and learning from each other.

What are some best practices or frameworks that you recommend for businesses aiming to integrate sustainable AI practices into their strategies?

Creating and adopting energy-efficient AI models is particularly necessary for data centers. While this is often overlooked by data centers, using code optimization means that traditional, energy-intensive software and data processing tasks will consume significantly less power.

I would then recommend using frameworks such as a carbon footprint assessment to monitor current output and implement plans for reducing these levels. Finally, overseeing the lifecycle management of AI systems is crucial, from collecting data and creating models to scaling AI throughout the business.

Final Thoughts:

In your opinion, what key takeaways should business leaders keep in mind when considering the optimization of AI code and the future of AI in their organizations?

When considering the optimization of AI code and its future role in their organizations, business leaders should focus on several key aspects. Firstly, efficient and optimized AI code leads to better performance and effectiveness in AI systems, enhancing overall business operations and decision-making.

Cost-effectiveness is another crucial factor, as optimized code can significantly reduce the need for computational resources. This lowers operational costs, which becomes increasingly important as AI models grow in complexity and data requirements. Moreover, future-proofing an organization’s AI capabilities is essential in the rapidly evolving AI landscape, with code optimization ensuring that AI systems remain efficient and up-to-date.

With increasing regulatory scrutiny on AI practices, optimized code can help ensure compliance with evolving regulations, especially in meeting ESG (Environmental, Social, and Governance) goals. Code optimization is thus a strategic imperative for business leaders, encompassing performance, cost, ethical practices, scalability, sustainability, future-readiness, and regulatory compliance.

As we conclude this interview, could you provide a glimpse into what excites you the most about the intersection of code optimization, AI, and sustainability in business and technology?

Definitely. I’m excited about sustainable innovation, particularly leveraging AI to optimize AI and code. This approach can really accelerate innovation with minimal environmental impact, tackling complex challenges sustainably. Generative AI, especially, can be resource-intensive, leading to a higher carbon footprint. Through code optimization, businesses can make their AI systems more energy-efficient.

Secondly, there’s the aspect of cost-efficient AI. Improved code efficiency and AI processes can lead to significant cost savings, encouraging wider adoption across diverse industries. Furthermore, optimized code runs more efficiently, resulting in faster processing times and more accurate results.

Do you have any final recommendations or advice for businesses looking to leverage AI optimally while remaining ethically and environmentally conscious?

I would say the key aspect to embody is continuous learning and adaptation. It’s vital to stay informed about the latest developments in AI and sustainability. Additionally, fostering a culture of continuous learning and adaptation helps integrate new ethical and environmental standards as they evolve.

Leslie Kanthan

Chief Executive Officer and Founder at TurinTech AI

Dr Leslie Kanthan is CEO and co-founder of TurinTech, a leading AI optimisation company that empowers businesses to build efficient and scalable AI by automating the whole data science lifecycle. Before TurinTech, Leslie worked for financial institutions, where he was frustrated by the manual machine learning development and code optimisation processes. He and the team therefore built an end-to-end optimisation platform – evoML – for building and scaling AI.

AI-Tech Interview with Jim Milton, Chief Strategy Officer at SmartRecruiters

Explore the effects of impending AI regulations on businesses and steps to prepare. Partner with SmartRecruiters for their commitment to regulatory readiness and prioritize risk management as a core objective.

Jim, could you please introduce yourself and tell us about your role at SmartRecruiters?

As Chief Strategy Officer, I help our company determine where to compete and how to win. This in part means playing the role of pragmatic futurist, helping us see around corners as much as possible. It’s not easy, and no one has a crystal ball. That said, I have been fortunate enough to anticipate and ride key tech waves throughout my career. I founded a digital music startup in 1998 (before Napster), filed a patent application on non-fungible objects in 2000, and was an early participant in the social recruiting movement (employee 300 at LinkedIn, and on the management team at SelectMinds, which launched an industry-first social employee referral system). I am currently working with my colleagues at SmartRecruiters to anticipate the future of talent acquisition software.

What are some of the challenges and opportunities facing organizations with the rise of AI? 

There are many. On one end of the spectrum, there’s a philosophical extreme of safety-ism or deceleration, which is a stance of presuming maximum AI risk and therefore moving very slowly, or not at all, to embrace the technology. This is dangerous, because it is becoming clear that businesses will either ride this huge wave or be knocked down by it. At the other extreme, businesses must beware of opportunistic startups that leverage AI/ML with reckless abandon for ethics and regulations.

There’s both fear and excitement from organizations around the impact of AI. On the fear side, many are concerned about job displacement. On the excitement side, there is hope AI will further automate tedious tasks. Can you elaborate on that?

Organizations and individuals with a bias for bureaucracy fear AI because it will lay all organizational inefficiencies bare. These fears are psychologically rooted. At our core, most humans are, to one degree or another, afraid of change and of making a commitment to lifelong learning – which is required to maintain relevance in the job market. We simply need to commit.

Is SmartRecruiters harnessing AI in any way internally to fuel your processes?

My team is exploring internal tooling for automating marketing tasks, for example. A colleague of mine recently attended an AI conference and returned with some next practices to test. Every department is exploring possibilities.

I understand SmartRecruiters also uses AI to better serve its customers. Can you tell us a bit about that?

Our SmartAssistant product, for example, helps recruiters save countless hours of busy work by replacing optical matching (reading hundreds of resumes) with automated matching that compares resumes to job descriptions at scale to predict which candidates could be a good fit.
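For readers who want a feel for how automated matching can work, here is a deliberately simple sketch using TF-IDF cosine similarity. It is illustrative only – SmartAssistant’s actual models are not public – and the job text and resumes are invented.

```python
# A minimal sketch of resume-to-job matching via text similarity.
# Illustrative only -- not SmartRecruiters' SmartAssistant model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

job_description = "Senior data engineer: Python, SQL, cloud pipelines, Airflow"
resumes = [
    "5 years building Python ETL pipelines on AWS; strong SQL and Airflow",
    "Front-end developer focused on React, CSS, and design systems",
]

vec = TfidfVectorizer(stop_words="english")
matrix = vec.fit_transform([job_description] + resumes)
scores = cosine_similarity(matrix[0:1], matrix[1:]).ravel()

# Rank candidates by similarity to the job description, best first.
for resume, score in sorted(zip(resumes, scores), key=lambda p: -p[1]):
    print(f"{score:.2f}  {resume[:50]}")
```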

There has been talk of AI injecting bias into the recruiting / screening process. How do you view that?

It is important to note that the stakes are higher when one trusts AI to make decisions autonomously versus when AI is used as a co-pilot to inform human decisions. Right now, in my opinion, it is too soon to let an AI run a hiring process on full autopilot, given bias and other risks. The specific challenge of bias is an interesting one. For example, if an AI is trained to “clone” a company’s top performers, who also happen to be a homogenous group hired without regard to diversity, there is a risk of perpetuating bias. This is why I do not personally advocate for such an approach. In the end, AI is as biased or unbiased as the strategy behind its application.

There’s looming AI regulation. How will that impact organizations? And what should they do to get ready?

No one knows for certain what the regulatory landscape will look like down the road. It’s important, therefore, to work with vendors that have a proven track record of taking regulatory readiness very seriously. SmartRecruiters is one such organization. In light of the pace of change on this front, we have elevated risk management to become one of our top corporate objectives – as important as profitability and growth. We believe we must continuously earn and renew our customers’ trust, so we resource risk reduction and make it a highly visible, cross-functional priority.

What are some best practices you can share for any organization looking to embrace AI in their organization?

It’s key to start in the problem space rather than formulating AI solutions in search of problems. I like using jobs-to-be-done as an analytic framework for unpacking needs and identifying outcomes I’d like to improve, such as increasing the speed or accuracy of a process. After prioritizing problems and desired outcomes, one can think about attacking them with intelligent automation.

What advice would you give a budding professional looking to work in the HR tech field?

Don’t be like me. Spend some time thinking not about the future, but instead about the timeless aspects of HR that will never change, and become a student of HR’s core “infinite game.” Build a deep and diverse human skill set that is AI-proof. Then, or in parallel, start tinkering with AI and no-code development tools and create your first set of widgets that help you automate mundane tasks. You will become an invaluable asset to any HR team and will future-proof your career.

Jim Milton

Chief Strategy Officer at SmartRecruiters

With over 20 years of experience in software startups and scale-ups, Jim is a seasoned executive leader and strategist who aligns product and GTM functions to launch innovative products, grow revenue, and win markets. He has led two successful exits: SelectMinds to Oracle (HR tech) and Portfolium to Instructure (EdTech). Jim is currently the Chief Strategy Officer at SmartRecruiters, a leading talent acquisition (SaaS) platform that enables hiring success for thousands of customers worldwide. Jim is also an active advisor at Tourial and a San Diego Ambassador at Product Marketing Alliance.

AI-Tech Interview with Gopi Sirineni, President and CEO of Axiado

Learn about the primary challenges in securing cloud data centers and 5G networks against cyberattacks, including the targeting of BMCs by cybercriminals.

Introduction:

As President and CEO at Axiado, please share your background in AI-enabled hardware security.

As the President and CEO of Axiado, my journey in AI-enabled hardware security has been both challenging and exhilarating. Over the past four years, I’ve leveraged my extensive experience from IDT, Marvell and Qualcomm to drive innovation in this field. My background in the wired and wireless networking industry has been crucial in understanding and advancing these technologies.

I’m often referred to as a ‘thrill-seeking CEO,’ a title that reflects my love for extreme sports like skydiving and bungee jumping, as well as other active sports like basketball and cricket. These activities are more than hobbies for me; they symbolize my approach to business—taking calculated risks, embracing challenges, pushing my limits and constantly striving for excellence.

One of the most exciting technology developments I’ve witnessed in my career is the advent of generative AI. I believe it’s the most significant innovation since the smartphone, with the potential to revolutionize various sectors.

What inspired you to lead Axiado in addressing security challenges in cloud data centres and 5G networks?

In this rapidly evolving threat landscape, Axiado saw an opportunity to provide a new approach to cybersecurity and embarked on a mission to conceive a solution that would fortify existing security frameworks. This solution is designed to be reliable, self-learning, self-defending, AI-driven, and fundamentally anchored within hardware. This ambitious vision ultimately gave birth to the concept of trusted compute/control units (TCUs), a meticulously crafted solution designed from inception to deliver comprehensive security for data center control and management ports.

Overview:

Can you provide an overview of AI-enabled hardware security against ransomware, supply chain, side-channel attacks, and other threats in cloud data centres and 5G networks?

According to IBM Security’s most recent annual Cost of a Data Breach Report, the average cost of a data breach reached a record high of $4.45 million in 2023. The report concluded that AI technology had the greatest impact on accelerating the speed of breach identification and containment. In fact, organizations that fully deployed AI cybersecurity approaches typically experienced 108-day-shorter data breach lifecycles and significantly lower incident costs (on average, nearly $1.8 million lower) compared to organizations without these AI technologies.

The ability of a hardware-anchored, AI-driven security platform to continuously monitor and perform run-time attestation of cloud containers, platform operating systems, and firmware creates efficiencies that help reduce time spent investigating potential threats. A hardware solution that integrates AI into a chip can analyze behaviors and CPU usage. This enables it to immediately investigate anomalies in user activity. With this approach, networks can no longer be infiltrated because of software vulnerabilities or porous firmware. AI technology enables heterogeneous platforms that include root-of-trust (RoT) and baseboard management controllers (BMCs) to offer hierarchy and security manageability. By deterring cybercrime at the hardware level, the industry can finally address the long-standing shortfalls of online security.

How does Axiado contribute to AI-driven hardware security in these environments?

Axiado TCUs harness the power of intelligent, on-chip AI to thoroughly scrutinize access sessions, detect anomalies, and monitor the boot process for potential side-channel attacks. These side-channel attacks encompass subtleties like voltage glitches and thermal anomalies. TCUs respond promptly to identify and neutralize these insidious threats. Furthermore, TCUs have been trained to recognize behavior patterns that are emblematic of known ransomware attacks, a capability honed through the analysis of hardware traces. This pattern recognition enables TCUs to promptly detect and thwart ransomware attacks in real-time, mitigating the potential damage.

Challenges:

What are the primary challenges in securing cloud data centers and 5G networks against cyberattacks?

Cybercriminals often target BMCs to execute their schemes to steal data for ransom, implant malicious code that can cause users to reveal passwords and other sensitive data, or bring down an entire network to cause chaotic service disruptions. These vulnerabilities usually emerge when a third-party program or firmware is installed in a device that allows arbitrary read and write access to a BMC’s physical address. The BMC is a key target for cybercriminals because it is the first processor to run on a server, even before a main processor like the CPU and GPU. As such, hacking the BMC’s firmware can affect every other firmware or software application that runs after it.

In some instances, cybercriminals resort to physical breaches to execute inside-out assaults, further compounding the complexity of the security landscape. In all these scenarios, the adversary gains ingress into the system through some form of credential compromise, whether it is through the act of clicking on malicious links or the loss of credentials.

Can you explain the challenges of side-channel attacks and how AI hardware security solutions address them?

Next-generation networks, particularly dispersed 5G cellular base stations, often lack the physical security that servers enjoy, making them vulnerable to side-channel attacks aimed at extracting the cryptographic keys that protect sensitive user data. By implementing an on-board TCU solution, specifically tailored for 5G base stations, the network gains enhanced protection against power analysis, voltage glitching, and clock manipulation attacks. Axiado offers the advantages of a security offload card while allowing for additional customization beyond module interface standards.

Innovation:

What innovative solutions has Axiado developed for AI-enabled hardware security in these networks?

Our TCUs introduce a new category of forensic-enabled cybersecurity processors, providing real-time and proactive AI-based threat detection. Multiple cores of AI engines inside the TCUs are specifically trained for each functional model, including sensor/telematics data analysis and reported ransomware attacks. This enables continuous monitoring, detection, prediction, and interception of attacks in real-time. The TCUs offer runtime protection, automation, and advanced mitigation capabilities using AI algorithms. Additionally, the TCUs feature distributed hardware security managers with anti-tamper and anti-counterfeit measures, control/management plane SmartNIC network interfaces, and safeguards against side-channel attacks.

By integrating AI-driven real-time threat mitigation, hardware fingerprints, platform monitoring, and optimization through AI and machine learning, Axiado’s TCUs contribute to creating a safer and more secure digital infrastructure.

How is the intersection of AI and hardware security evolving to counter emerging cyber threats?

Hardware-based detection involves specialized hardware devices that monitor system behavior and detect signs of an attack by monitoring CPU usage, disk activity, and network traffic. Network packet behavior anomaly detection involves monitoring network traffic and analyzing packets to identify unusual patterns or behaviors that may indicate an attack. Hardware-based anomaly detection enables system administrators to detect and prevent ransomware attacks before they cause significant damage.

CPU performance monitor counters detect attacks by flagging unusual CPU usage patterns, so system administrators can forestall damage. AI engines significantly enhance detection by identifying advanced attack patterns that traditional techniques may miss. Analyzing large amounts of data and identifying subtle patterns is an integral attribute of AI-based hardware security.
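A stripped-down version of the statistical idea – flagging readings that deviate sharply from a baseline – might look like the following. This is a generic sketch, not Axiado’s TCU logic; the samples and threshold are invented.

```python
# Illustrative anomaly flagging on CPU-usage samples (not Axiado's TCU logic).
import statistics

def flag_anomalies(samples, threshold=2.5):
    """Flag readings more than `threshold` standard deviations from the mean."""
    mu = statistics.mean(samples)
    sigma = statistics.pstdev(samples) or 1e-9  # avoid divide-by-zero
    return [(i, x) for i, x in enumerate(samples) if abs(x - mu) / sigma > threshold]

cpu_usage = [12, 14, 13, 11, 15, 12, 13, 96, 14, 12]  # percent, per interval
print(flag_anomalies(cpu_usage))  # -> [(7, 96)]: the spike stands out
```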

Individual perspective:

Gopi, can you share your perspective on the direction and lasting impact of AI-enabled hardware security in the cybersecurity landscape?

Enterprises still face downtime, productivity loss, and the need to rebuild systems, albeit at a reduced cost. To combat these limitations, I believe the industry must augment hardware itself with intelligence and forensics capabilities.

By incorporating monitoring software and AI-driven attack prevention directly into the hardware, we can reimagine the solution to complement and enhance software-based ransomware detection. This approach can extend to various security functions assigned to the motherboard, such as the baseboard management controller, hardware RoT, trusted platform module, programmable FPGA/CPLD, and management LAN. By tailoring the solution to different server types, we can better align with the unique needs and security requirements of cloud data centers and 5G networks.

Final Thoughts:

As we wrap up this interview, what’s your final message on the future of AI-enabled hardware security for cloud data centres and 5G networks?

The threat of ransomware attacks continues to loom, and relying solely on traditional methods is no longer sufficient. We have the opportunity to take a significant step forward by embracing modular security solutions and integrating hardware-anchored, AI-driven security with software intelligence and forensics. This approach can limit the impact of ransomware attacks and reduce the costs associated with system replacement.

Through innovation and collaborations with industry stakeholders, engineers can proactively protect data centers and 5G networks against the latest cyber threats. By doing so, we ensure the security and integrity of critical information.

Gopi Sirineni

President and CEO of Axiado

Gopi Sirineni is President and CEO of Axiado, a semiconductor company deploying a novel, AI-driven approach to platform security against ransomware, supply chain, side-channel and other cyberattacks. He is a Silicon Valley veteran with more than 25 years of experience in the semiconductor, software and systems industries. Prior to Axiado, Gopi was a vice president in Qualcomm’s wired/wireless infrastructure business, creating market-dominating Wi-Fi and Wi-Fi SON technologies. His career highlights include executive positions at Marvell, AppliedMicro, Cloud Grapes, and Ubicom (acquired by Qualcomm). Gopi’s pioneering foresight into distributed mesh technology created the connected, AI-based home market segment.

AI-Tech Interview with Murali Sastry, SVP Engineering at Skillsoft

From cultural transformation to pioneering CAISY, discover insights shaping the future of education.

Introduction:

Murali, Could you begin by providing us with an introduction and detailing your career trajectory as the Senior Vice President, Engineering at Skillsoft?

I joined Skillsoft in 2016 as the VP of engineering after a career spanning over two decades at IBM, where I led the build-out of large-scale enterprise solutions and innovative software products. 2016 was an exciting time to join Skillsoft, as the learning industry was undergoing major disruption. To stay competitive, Skillsoft was in the process of building an innovative, AI-driven learning platform called Percipio. With the support of a new leadership team, we were able to build the platform from the ground up and bring it to market within a year.

The project involved not only building a new product but changing the culture and operations of our technology team, including the launch of a new tech stack built on AWS public cloud infrastructure. Over the past several years, we have grown the product family and organization to include new products and services and, in the process, took ownership of transforming the cloud operations organization.

We managed to modernize how we build, deploy, and support our products in the cloud through continuous integration and deployment to deliver new capabilities to the market at lightning speed while maintaining a highly secure, resilient, and performant learning platform that serves millions of learners. 

Over the years, we built a strong culture of innovation within our engineering team, which is one of the most exciting parts of my job today. Every quarter, we do an innovation sprint, where team members organically produce ideas to advance platform capabilities. Our philosophy is to establish a grassroots mindset to produce innovative ideas that solve our customers’ business problems and improve experiences for our learners. Many of our AI and machine learning innovations have come out of this process, helping to make our platform smarter and our learning experiences more personalized.  

Can you provide a brief introduction to CAISY (Conversation AI Simulator) and its role in Skillsoft’s offerings?

CAISY, an AI-based conversation simulator that helps learners build business and leadership skills, was born out of one of our innovation sprints. The original idea was implemented on a simple terminal text-based interface using GPT-3.5, but we saw the power of the concept and decided to develop it into a customer-facing product. Skillsoft launched CAISY out of beta in September using generative AI and GPT-4 to help learners practice and role-play various business conversations. While Skillsoft has extensive learning content on how business, management, and leadership conversations should be handled, learners can now practice and apply these skills in real time. Developments in generative AI allow us to leverage our knowledge and expertise in this area while providing a hands-on environment for our learners, so that they can practice conversational skills in a safe and secure zone before implementing them in the real world.

Overview of CAISY: 

How does CAISY simulate real-world conversations, and what types of scenarios can it be used for?

CAISY is built around a system of AI agents that power, simulate, and evaluate the conversation. We have guardrails to ensure that the conversations are on topic, safe, and unbiased. Different AI agents act as subjects and actors, emulating different roles so that the end-to-end conversation mimics a real-world conversation.  
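As a rough illustration of that shape (a toy sketch under my own assumptions, not Skillsoft’s actual CAISY architecture), role-playing agents can be wired together behind a guardrail pass like this:

```python
# Illustrative sketch of a role-playing simulator with a guardrail pass.
# The agent interface and guardrail rules are assumptions, not CAISY's code.
from dataclasses import dataclass, field

BLOCKED_TOPICS = {"politics", "medical advice"}  # toy guardrail policy

def guardrail_ok(text: str) -> bool:
    """Stop the exchange if it drifts off-topic or into unsafe territory."""
    return not any(topic in text.lower() for topic in BLOCKED_TOPICS)

@dataclass
class Agent:
    role: str              # e.g., "direct report", "evaluator"
    system_prompt: str

    def respond(self, history: list[str]) -> str:
        # A real system would call an LLM with self.system_prompt and the
        # running history; here we return a canned line for illustration.
        return f"[{self.role}] reply to: {history[-1]!r}"

@dataclass
class Simulation:
    actor: Agent           # plays the other side of the conversation
    evaluator: Agent       # scores the learner afterwards
    history: list[str] = field(default_factory=list)

    def turn(self, learner_utterance: str) -> str:
        if not guardrail_ok(learner_utterance):
            return "Let's keep this conversation on topic."
        self.history.append(learner_utterance)
        reply = self.actor.respond(self.history)
        self.history.append(reply)
        return reply

sim = Simulation(Agent("direct report", "You receive feedback."),
                 Agent("evaluator", "You score clarity and empathy."))
print(sim.turn("I'd like to talk about your last project."))
```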

In our initial launch, CAISY offered ten conversation scenarios for practice, including topics across performance management, sales motion, change management, and more. As we continue to build and release more scenarios, we are focusing on key areas like first-time managers and coaching. For example, we’ve found that 60% of first-time managers fail to meet their performance goals in their first two years on the job and 52% struggle with giving feedback to their team members. We believe CAISY can make an enormous impact for our customers in these critical areas.  

What are the key benefits that organizations and learners can expect to derive from using CAISY in their training programs?

CAISY provides a personalized, simulated, and interactive experience where learners can practice and test their business conversation skills in real time. The evaluation and feedback from CAISY give a clear picture of the learner’s conversational and communication abilities. Based on the learner’s skill levels, we offer learning resources along with personalized recommendations and pathways to improve their knowledge and capabilities. Organizations can analyze the skill level of their business leaders and implement learning strategies to advance their workforce.

Using AI Ethically and Responsibly: 

Ethical use of AI, particularly in training and learning applications, is a growing concern. How does Skillsoft ensure that CAISY mitigates biases in training data and maintains ethical AI practices throughout its operation?

Ethical use of AI is a fundamental principle for everything we build. We’ve developed a system of guardrails and test methodologies to ensure that these conversations are unbiased, safe, and relevant for learners. These guardrails ensure that any deviations within the conversations that are found to be unsafe, biased, or unethical are immediately stopped.

Given your expertise, could you shed light on the ethical considerations you personally emphasize when working with AI and technology in education?

Generative AI is here to stay and will continue to evolve at a rapid pace over the next few years. I anticipate generative AI becoming a necessary tool for future productivity needs, much as email and Excel emerged as critical parts of our daily work lives. And just as generative AI will become necessary in the future of work, so too will AI ethics at the organizational and individual level. Organizations that incorporate AI ethics into their corporate DNA – via how they build products and educate their employees – will be the most successful.

Finally, Murali, what advice would you give to aspiring professionals who are looking to follow in your footsteps and make an impact in the field of AI-driven education and technology?

Think big. Imagine ideas that seem impossible today but may be possible in the future. From there, build things step by step.  As they say, Rome was not built in a day. Embrace change, be curious and become a lifelong learner to grow and keep your skills on pace with innovation. Be a technologist at heart and build things to make a difference to the world. 

Murali Sastry

SVP Engineering at Skillsoft

Murali Sastry is a seasoned IT executive with 20+ years of experience in leading large-scale engineering organizations and developing innovative products. Currently residing in Westford, MA, Murali has a strong history of building transformative software products and teams. As the SVP of Engineering at Skillsoft, Murali manages a global team of engineers developing and supporting Skillsoft’s learning platforms. Under his leadership, Skillsoft launched the award-winning AI-powered learning platform “Percipio.” Before joining Skillsoft, Murali worked at IBM for 20+ years, where he was instrumental in building competitive SaaS products such as IBM Connections, IBM Sametime, and Bluepages, and in building W3 – IBM’s intranet and workplace for 400K employees. Murali is a proven leader, distinguished by his ability to inspire and manage high-performing teams in today’s dynamic IT landscape. Murali has a Bachelor’s in Electronics from JNTU, India, and an MBA from Babson Olin School of Management.

AI-Tech Interview with Anar Mammadov, owner and CEO of Senpex Technology
Discover how smart warehouses, robotics, sustainability efforts, blockchain, and AI-powered inventory management are shaping the landscape.

Introduction

Anar, can you introduce yourself and briefly explain your role as CEO of Senpex Technology and what motivated you to start this company?

My name is Anar Mammadov and I am the CEO of Senpex Technology. My background involves 18 years of experience developing enterprise solutions in the areas of technology and logistics.

I founded Senpex when I realized no one was providing affordable last-mile logistics services. The company was created to fill that gap. Initially, my individual efforts focused on developing the technology that supports our solutions. Now, I am focused on developing the vision for our future growth and managing our teams with a primary focus on our development, operations, and data analytics teams.

Our primary motivation is seeing our B2B customers benefit from our products and services. Seeing our customers happy always makes me happy. Our desire to continue to meet their needs has motivated us to develop the tools we use to optimize the delivery process and it is inspiring us to build a future in which robotics, smart design, and autonomous vehicles will make last-mile logistics more efficient and effective.

Smart warehouses and robotics

How do smart warehouses and robotics improve grocery delivery, and what are the main benefits?

Smart warehouses leverage technology solutions to improve the efficiency of warehouse management. The technology solutions integrate with the business’s enterprise resource planning system to streamline grocery delivery processes, including ordering and organizing inventory and preparing and fulfilling orders. In many cases, smart warehouses automate grocery delivery processes, such as leveraging sensors and scanners to monitor inventory and trigger automated reordering.

Overall, smart warehouses automate processes, drive higher levels of efficiency, improve visibility and transparency, and enhance the customer experience.
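A generic sketch of that sensor-triggered reordering pattern, with invented SKUs and thresholds rather than Senpex’s software, might look like this:

```python
# Generic sketch of sensor-triggered reordering; SKUs and numbers are invented.
REORDER_POINT = {"milk": 40, "eggs": 60}   # units on hand that trigger reorder
REORDER_QTY = {"milk": 200, "eggs": 300}

def on_sensor_update(sku: str, units_on_hand: int, pending_orders: set[str]) -> None:
    """Called whenever a shelf/bin sensor reports a new count."""
    if sku in pending_orders:
        return  # an order is already in flight; avoid duplicates
    if units_on_hand <= REORDER_POINT.get(sku, 0):
        place_purchase_order(sku, REORDER_QTY[sku])
        pending_orders.add(sku)

def place_purchase_order(sku: str, qty: int) -> None:
    print(f"PO created: {qty} x {sku}")  # stand-in for an ERP integration call

pending: set[str] = set()
on_sensor_update("milk", 35, pending)   # -> PO created: 200 x milk
on_sensor_update("milk", 30, pending)   # no duplicate order is placed
```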

Can you share examples of how smart warehouses and robotics have improved grocery delivery efficiency?

We developed a “Click-to-Collect” software solution that illustrates one way in which smart warehousing can improve the grocery delivery process. The software was designed to address problems we were having with orders not being ready when our delivery teams would arrive at grocery warehouses for pickup. The bottleneck was slowing down the entire delivery process and making it very challenging to meet customers’ demands.

The Click-to-Collect solution integrates with warehouse management tools to automate order fulfillment. It designates collectors who gather and sort the order and couriers who deliver the order. It also provides real-time fulfillment tracking for the warehouse, delivery team, and customer, improving delivery transparency.

By integrating with the warehouse management tools, Click-to-Collect also empowers real-time visibility of stock inventory, which prevents businesses from selling out-of-stock items.

In the area of robotics, we have recently begun utilizing wearable suit technologies to improve the safety of our logistics workers. The suits use robotic technology to enhance a human’s physical capabilities and work efficiency. They essentially make workers stronger, allowing them to lift heavier objects and do more lifting without risking injury.

We’ve also seen robots being used to move pallets of groceries within warehouses. Using robots in that way allows for automation which can enhance warehouse management, improve safety, and streamline fulfillment processes.

Sustainability and eco-friendly delivery

How does Senpex Technology promote sustainability in grocery delivery, and how can others follow suit?

One of the primary ways we promote sustainability is with AI-powered route optimization. By ensuring our delivery vehicles are taking the most efficient routes — which often includes scheduling multi-stop routing and batching multiple products — we minimize their fuel consumption and carbon emissions.
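As a toy illustration of the underlying idea of ordering multi-stop deliveries to cut total distance, a greedy nearest-neighbor heuristic is sketched below; Senpex’s AI-powered optimizer is, of course, far more sophisticated than this:

```python
# Toy nearest-neighbor routing heuristic; illustrative only.
import math

def nearest_neighbor_route(depot: tuple, stops: list) -> list:
    """Order delivery stops greedily by proximity, starting from the depot."""
    route, current, remaining = [], depot, stops[:]
    while remaining:
        nxt = min(remaining, key=lambda p: math.dist(current, p))
        route.append(nxt)
        remaining.remove(nxt)
        current = nxt
    return route

depot = (0.0, 0.0)
stops = [(2.0, 3.0), (0.5, 0.5), (5.0, 1.0), (1.0, 4.0)]
print(nearest_neighbor_route(depot, stops))
# -> [(0.5, 0.5), (2.0, 3.0), (1.0, 4.0), (5.0, 1.0)]
```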

Driving greater efficiencies in last-mile logistics also helps to decrease spoilage, which contributes to greater sustainability. Optimizing warehouse management and delivery processes means inventories are optimized and food moves more quickly from suppliers to consumers, which results in less spoilage.

Others can follow suit by integrating optimization tools into their fulfillment systems. Our Senpex API is a fully automated dispatch system that integrates easily with e-commerce platforms and enterprise resource planning systems to drive higher sustainability through route optimization and other efficiency solutions.

Blockchain and food traceability

How is Senpex Technology using blockchain to enhance food traceability, and what benefits does it offer?

The key benefit blockchain brings to food traceability is decentralization. By doing away with the need for a centralized database for food tracking, blockchain allows for more efficient, transparent, and reliable systems.

Currently, we are building our platforms to facilitate the shift to a decentralized approach that blockchain can empower. As food traceability shifts in that direction, we will have last-mile logistics systems that are ready to support it.

AI-powered inventory management

How does Senpex Technology use AI in inventory management to optimise grocery delivery, and what results have you observed?

One of the top ways AI can be used in inventory management to optimize delivery is by empowering real-time visibility of warehouse stock. By using AI to automate warehouse management systems, businesses can dynamically update inventory lists. 

This empowers automated reordering, which ensures grocery businesses have the stock they need when they need it. It also prevents customers from placing orders for things that are not in stock, which creates inefficiency in the delivery process and decreases customer satisfaction.
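In code, the order-side benefit reduces to a simple check of cart lines against live stock counts. The sketch below is generic and hypothetical, not Senpex’s actual API:

```python
# Generic sketch: split cart lines by live stock counts; data is invented.
live_stock = {"apples": 12, "bread": 0, "coffee": 5}  # fed by warehouse system

def validate_cart(cart: dict) -> tuple[dict, dict]:
    """Split a cart into fulfillable lines and out-of-stock lines."""
    ok, unavailable = {}, {}
    for sku, qty in cart.items():
        if live_stock.get(sku, 0) >= qty:
            ok[sku] = qty
        else:
            unavailable[sku] = qty
    return ok, unavailable

fulfillable, missing = validate_cart({"apples": 3, "bread": 2})
print(fulfillable)  # {'apples': 3}
print(missing)      # {'bread': 2} -> prompt a substitute before dispatch
```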

Overview

Can you summarise the key technological advancements shaping the future of grocery delivery beyond autonomous vehicles and SaaS?

The technology that empowers smart warehouses — including AI-driven automations, robotics, and management systems that provide greater accessibility, efficiency, and transparency — is a key advancement in grocery delivery. As more businesses embrace blockchain solutions, they will reshape the ways in which supply chains are monitored and transparency is established across the delivery spectrum. Robotics is also empowering a more efficient future by enhancing human efforts and allowing for groceries to be physically managed through automated processes.

Personal Advice

As a leader in technology-driven grocery delivery, what advice do you have for businesses in this competitive field?

Last-mile logistics is very pricey. In the grocery industry, many businesses are struggling to overcome the challenges posed by the high price of delivery, especially if they are relying on DoorDash or other on-demand solutions. My suggestion for overcoming that challenge is not overpromising when it comes to delivery times.

Build systems that provide a three-hour or four-hour delivery window and make those work. There is a lot more involved with meeting delivery windows than just last-mile delivery. Systems and teams need to be developed to gather products at the warehouse, sort them, package them, label them, and more.

Once you master those systems, then you can think about promising one-hour delivery windows. But be warned that it is a very pricey service to provide.

Vision

What is your vision for Senpex Technology’s future in grocery delivery, and what impact do you aim to achieve?

One of the innovations we are envisioning now is a system that improves efficiency by optimizing the labeling provided on the boxes our drivers pick up. This would fall under the category of warehouse management and aim to ensure shipments are properly delivered to the right customer at the right location.

Empowering greater use of autonomous vehicles is another initiative we are pursuing. By using our API to connect companies to autonomous vehicle delivery options, we believe we can add significant efficiencies to last-mile logistics, especially in dense urban areas.

Final thoughts

Any advice for our readers interested in the future of grocery delivery and technology’s role in it?

One of the biggest opportunities I see for businesses in grocery delivery is developing solutions for addressing out-of-stock items. Grocery and retail businesses need to develop a strategy for addressing this issue, which introduces challenges to delivery efficiency and detracts from the customer experience.

One solution could be to partner with stores that can supplement their supply. If the grocery business doesn’t have the product in its warehouse but a local third-party vendor does, it could use that vendor as a source, purchase and pick up the product, and still provide the needed product to the customer. That type of system could be driven by integrating the third-party vendor’s online marketplace with the business’s fulfillment system, essentially expanding its inventory.

I believe grocery suppliers who do not find a way to address this are going to have more and more customers seeking alternative stores that can supply their needs.

Anar Mammadov

owner and CEO of Senpex Technology

Anar Mammadov is the CEO of Senpex Technology. He is a software development professional with more than 18 years of experience in enterprise solutions and mobile app development. He has applied his practical and results-oriented approach to business to create Senpex Technology, a personalized logistics and delivery service that utilizes groundbreaking artificial intelligence to optimize routes and to provide the fastest, most efficient, last-mile delivery resource for businesses. Senpex can be utilized 24/7, with no interruptions to your delivery needs.

AI-Tech Interview with Eric Sugar, President at ProServeIT
Get insights on key principles, benefits, and best practices for implementing ZTA to enhance organisational security.

Eric, could you please introduce yourself and elaborate on your role as President at ProServeIT?

Hello, I’m Eric Sugar, President at ProServeIT. My focus is on helping clients set their strategic direction with regard to the technology that enables their business. My passion is teaching businesses how they can leverage technology to enable growth and added value. As President at ProServeIT, I support our clients and team in creatively deploying and using technology.

I hold a Bachelor of Arts (Economics and Math) from the University of Toronto.

I’m an avid rower, cyclist and hockey player who can put a golf ball in the woods better than most.

Can you provide a concise overview of Zero Trust Architecture and its significance in modern cybersecurity?

Zero Trust Architecture (ZTA) is a security model that assumes any user, system, or service operating within or outside an organization’s network perimeter is untrustworthy until proven otherwise. It is based on the principle of “never trust, always verify” and requires strict identity verification for every person and device trying to access resources on a private network, regardless of location. The principles behind a Zero Trust network include Identity and Access Management (IAM), Data Protection, and Network Segmentation.

In the context of ZTA, how does the concept of “never trust, always verify” apply to both internal and external network environments? What are the key implications of this approach for organisations?

In the context of ZTA, “never trust, always verify” applies equally to internal and external network environments. The key implication for organizations is that they must implement strict identity verification and access controls for every person and device trying to access their resources, regardless of location. This helps organizations mitigate cybersecurity risks and protect sensitive data effectively.

What are the key benefits of implementing Zero Trust Architecture, and how does it help organisations mitigate cybersecurity risks and protect sensitive data effectively?

The benefits of implementing ZTA include reducing the attack surface and preventing lateral movement by attackers within the network, as each resource is isolated and protected by granular policies and controls. It also enhances the visibility and monitoring of network activity and behavior, as each request and transaction is logged and analyzed for anomalies and threats. Additionally, it improves the compliance and governance of data and assets, as each access is based on the principle of least privilege and verified by multiple factors.
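Schematically, a per-request Zero Trust check combines the identity, device-posture, least-privilege, and audit-logging gates described above. The following is an illustrative sketch of the principle, not ProServeIT’s tooling:

```python
# Schematic ZTA policy check; attributes, roles, and rules are illustrative.
from dataclasses import dataclass

@dataclass
class Request:
    user: str
    mfa_verified: bool
    device_compliant: bool
    resource: str
    action: str

# Least-privilege policy: each role gets the minimum actions it needs.
POLICY = {
    "finance-reports": {"analyst": {"read"}, "controller": {"read", "write"}},
}
ROLES = {"alice": "analyst", "bob": "controller"}

def authorize(req: Request) -> bool:
    """Every request is verified; nothing is trusted by network location."""
    if not (req.mfa_verified and req.device_compliant):
        return False                       # identity + device posture gate
    allowed = POLICY.get(req.resource, {}).get(ROLES.get(req.user), set())
    decision = req.action in allowed       # least-privilege gate
    print(f"audit: {req.user} {req.action} {req.resource} -> {decision}")
    return decision                        # every decision is logged

authorize(Request("alice", True, True, "finance-reports", "write"))  # False
authorize(Request("bob", True, True, "finance-reports", "write"))    # True
```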

Could you share some best practices for organisations looking to adopt ZTA? What are the essential steps and considerations in the implementation process?

For organizations looking to adopt ZTA, I would recommend the following essential steps:

  1. Assess the current network architecture and identify the assets, services, workflows, and data that need to be protected.
  2. Define the policies and controls that govern access to each resource based on the Zero Trust principles.
  3. Deploy the ZTA components that enforce those policies and controls across the network.

Can you provide examples or case studies of organisations that have successfully implemented ZTA? What challenges did they face, and how did ZTA help address those challenges?

There are real-world case studies that show Zero Trust in action. For example, a shipping company that operates a fleet of cargo vessels across different regions improved its cybersecurity posture by adopting ZTA. The company identified its key resources, classified them according to their criticality and sensitivity levels, and defined its policies and controls for accessing each resource based on the Zero Trust principles. The company then deployed the ZTA components that enforced the policies and controls across the network, and monitored and evaluated the performance and effectiveness of the ZTA components regularly.

Are there any notable industries or sectors where ZTA has shown exceptional promise or results? What can other organisations learn from these success stories?

Transportation, finance, and healthcare businesses have all been leveraging ZTA, and those that have implemented it are better able to protect their organizations from breaches. Other organizations can treat Identity and Access Management as a first workload for piloting ZTA and learning from it.

As ZTA continues to evolve, what do you see as the future of cybersecurity and network security? How does continuous verification play a role in this future landscape?

As ZTA continues to evolve, it is shaping the future of cybersecurity by providing a more effective and adaptive security model for the modern environment. With the increasing sophistication and frequency of cyberattacks, the growing adoption of cloud computing, mobile devices, and the internet of things (IoT), and the changing nature of work, traditional perimeter-based security models are no longer sufficient. Zero Trust provides a more comprehensive and dynamic approach to securing digital assets, and is becoming the new standard for organizations of all sizes and industries.

What advice do you have for organisations looking to strengthen their cybersecurity posture in today’s ever-evolving threat landscape?

Organizations need to stay up to date with the latest developments in cybersecurity and implement best practices for protecting their data and assets. This includes adopting a Zero Trust approach to security, implementing strict identity verification and access controls, and continuously monitoring and analyzing network activity for anomalies and threats.

What key takeaways would you like to leave our audience with regarding the importance of adopting ZTA principles in the future of cybersecurity?

The key takeaway for organizations is the importance of adopting ZTA principles in the future of cybersecurity. By implementing a Zero Trust approach to security, organizations can better protect their data and assets from cyber threats and ensure the safety and security of their operations.

Eric Sugar

President at ProServeIT

Whether it’s helping his employees remove roadblocks, educating customers on how various technologies can make their jobs and their lives better, or instructing leaders on the importance of corporate and personal cybersecurity, Eric Sugar, President of ProServeIT, always takes a people-centric approach to his role. With over 25 years in the IT industry, Eric’s been with ProServeIT since its inception in 2002.

He loves seeing what technology can do for people and how technology can have such a positive impact on organizations. But it’s not just about helping those already in their careers – as the father of 3 young daughters, Eric also wants to see future generations succeed, so he has spearheaded several initiatives with local grade schools to introduce boys and girls to the wonders of technology. In his free time, Eric is a big supporter of the Princess Margaret Cancer Foundation and actively participates in many of their annual fundraising activities.

AI-Tech Interview with Dr. Shaun McAlmont, Chief Executive Officer at NINJIO Cybersecurity Awareness Training
Learn about the latest cyber threats, ransomware defense, and NINJIO’s innovative training approach.

Shaun, could you please introduce yourself and elaborate on your role as CEO of NINJIO?

I’m Shaun McAlmont, CEO of NINJIO Cybersecurity Awareness Training. I came to NINJIO after decades leading organizations in higher education and workforce development, so my specialty is in building solutions that get people to truly learn. 

Our vision at NINJIO is to make everyone unhackable, and I lead an inspiring team that approaches cybersecurity awareness training as a real opportunity to reduce organizations’ human-based cyber risk through technology and educational methodologies that really change behavior.

Can you share insights into the most underestimated or lesser-known cyber threats that organisations should be aware of?

The generative AI boom we’re experiencing now is a watershed moment for the threat landscape. I think IT leaders have a grasp of the technology but aren’t fully considering how that technology will be used by hackers to get better at manipulating people in social engineering attacks. Despite the safeguards the owners of large language models are implementing, bad actors can now write more convincing phishing emails at a massive scale. They can deepfake audio messages to bypass existing security protocols. Or they can feed a few pages of publicly available information from a company’s website and a few LinkedIn profiles into an LLM and create an extremely effective spearphishing campaign.

These aren’t necessarily new or lesser-known attack vectors in cybersecurity. But they are completely unprecedented in how well hackers can pull them off now that they’re empowered with generative AI.

With the rise of ransomware attacks, what steps can organisations take to better prepare for and mitigate the risks associated with these threats?

The first and biggest step to mitigating that risk is making sure that everyone in an organization is aware of it and can spot an attack when they see one. It took a ten-minute phone call for a hacking collective to breach MGM in a ransomware attack that the company estimates will cost it over $100 million in lost profits. Every person at an organization with access to a computer needs to be well trained to spot potential threats and be diligent at confirming the validity of their interactions, especially if they don’t personally know the individual with whom they’re supposedly speaking. The organizational cybersecurity culture needs to extend from top to bottom.

Building that overarching cultural change requires constant vigilance, a highly engaging program, and an end-to-end methodological approach that meets learners where they are and connects the theoretical to the real world.

How does NINJIO’s cybersecurity awareness training approach differ from traditional training methods, and what are the primary benefits for organisations that adopt it?

Traditional workforce training is often an annual, one-size-fits-all, hours-long presentation or video designed to check a box for legal compliance or insurance requirements. It wasn’t designed with a sound educational methodology for the average user in mind. And everyone hates it as a waste of time.

NINJIO is completely different because it is so engaging. We rely on a monthly cadence of short-form video episodes and follow-up reminders that take no more than seven minutes total to complete. Every episode is relevant because it is based on a real-life hack, and the reminders deliver the key takeaways in a varied but repetitive way to aid learning retention.

Paired with our simulated phishing solution, we’re even able to personalize content delivery based on an individual’s unique emotional susceptibilities to boost their self-awareness and provide a tailored learning experience. End users actually watch their trainings because we make them engaging. That engagement feeds a base level of vigilance against cyber threats.

What are the most common employee-related cybersecurity vulnerabilities, and how can NINJIO’s training help address these vulnerabilities effectively?

The most common is social engineering. The vast majority of successful breaches – 74% – involve a human element where someone was tricked into making a mistake that allowed a bad actor to access an organization’s system.

Social engineering is about manipulating people’s emotional vulnerabilities so they do something they otherwise wouldn’t. Those vulnerabilities, which we’ve identified as urgency, obedience, fear, opportunity, greed, sociableness, and curiosity, underpin every single social engineering attack.

NINJIO’s solution uses simulated phishing to build a risk profile for each user and then deploys our NINJIO SENSE training content based on that profile so they receive the educational content that is most pertinent to their needs. Every person is susceptible to different techniques in social engineering, so we identify which are most likely to work and help users overcome them.
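A toy version of that profile-driven selection might look like the following. The seven categories come from the interview; the scoring and the content names are invented for illustration:

```python
# Toy sketch of susceptibility scoring driving content selection.
# Categories come from the interview; the scoring itself is invented.
from collections import Counter

CATEGORIES = ["urgency", "obedience", "fear", "opportunity",
              "greed", "sociableness", "curiosity"]

def update_profile(profile: Counter, phish_template: str, clicked: bool) -> None:
    """After each simulated phish, bump the score of the lure it used."""
    if clicked and phish_template in CATEGORIES:
        profile[phish_template] += 1

def next_training(profile: Counter) -> str:
    """Serve the episode targeting the user's weakest spot."""
    if not profile:
        return "baseline-awareness"
    weakest, _ = profile.most_common(1)[0]
    return f"episode-{weakest}"

profile: Counter = Counter()
update_profile(profile, "urgency", clicked=True)
update_profile(profile, "curiosity", clicked=True)
update_profile(profile, "urgency", clicked=True)
print(next_training(profile))  # -> episode-urgency
```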

Could you highlight some best practices for developing a robust cybersecurity posture?

Implement a robust cybersecurity awareness training program. In a world where three quarters of all successful breaches happen due to human error, there is no technological strategy that will offer comprehensive cyber protection for an organization. You have got to train your users because they are the front line.

Make cybersecurity an organizational priority. I can’t stress enough how important leadership is to cybersecurity posture. It cannot be a topic that gets delegated downward on your organizational chart – every single person in an organization, and especially the CEO and other executives, has to be committed to following protocols and staying aware for any cybersecurity effort to work.

Require cybersecurity in your supply chain. Your company works with dozens, if not hundreds, of vendors who have access to your information and maybe your customers’ information. Require that they have cybersecurity controls implemented so you aren’t exposed to third party risk.

How can organisations assess and manage the cybersecurity risks associated with their third-party vendors and supply chain partners?

Much of this happens when organizations are preparing contracts and agreements. Consider the following:

  1. Require that partners or vendors have implemented a cybersecurity awareness training program so you know their employees are up to date on cyber threats.
  2. Implement mandatory cyber incident reporting so you’re able to judge your exposure.
  3. Set up secure information sharing mechanisms that keep sensitive assets secured.

Could you explain the importance of complying with cybersecurity regulations, and how can companies ensure they remain compliant in an ever-changing regulatory landscape?

Failure to comply with cybersecurity regulations brings incredible risk, including regulatory action, significant financial loss, and reputational ruin. Many cybersecurity regulations don’t even require observing what the industry has already established as best practices for basic protection, so meeting regulatory compliance requirements is something any organization should do automatically if it takes its cybersecurity seriously.

And the importance of remaining compliant extends to every company. Any enterprise with a computer system is vulnerable – even those who specialize in cyber protection. Breaches have affected every industry, from startups to corporate institutions.

Remaining compliant requires that organizations dedicate a role to cybersecurity in their organizational chart or hire a consultant whose job it is to raise concerns and keep the organization aware of risks, whether from cyber threats or from falling out of compliance.

Dr. Shaun McAlmont

CEO at NINJIO Cybersecurity Awareness Training

Dr. Shaun McAlmont is CEO of NINJIO Cybersecurity Awareness Training and one of the nation’s leading education and training executives. Prior to NINJIO, he served as President of Career and Workforce Training at Stride, Inc., had a decade-long tenure at Lincoln Educational Services, where he was President and CEO, and served as CEO of Neumont College of Computer Science. His workforce and ed tech experience is supported by early student development roles at Stanford and Brigham Young Universities. He is a former NCAA and international athlete and serves on the BorgWarner and Lee Enterprises boards of directors. He earned his doctoral degree in higher education, with distinction, from the University of Pennsylvania, a master’s degree from the University of San Francisco, and his bachelor’s degree from BYU.

AITech Interview with Doug Dooley, Data Theorem
Learn Data Theorem’s approach to securing cloud-native applications and APIs in an exclusive interview covered by AI-TechPark. 

Give us a brief background of Data Theorem.

Data Theorem was founded back in 2013 by Himanshu Dwivedi, a 25+ year veteran of the security industry going back to his days as a security researcher at @stake. He is one of the co-founders of iSEC Partners and the author of six security hacking books. Data Theorem was founded to analyze and secure any modern application, starting with mobile applications, APIs, SPAs, and serverless and cloud apps. We began by building our Analyzer Engine, which was the industry’s only solution that allowed customers to build safer apps that protected data better by applying dynamic run-time analysis on a continuous basis in search of security flaws and data privacy gaps. Today, we are the company that analyzes and secures any modern application – anytime and anywhere – with our advanced AppSec functionality, including the industry’s first automated API discovery and security inspection solution aimed at addressing the API security threats introduced by today’s cloud-native application architectures.

What are the biggest challenges organizations face when trying to secure their APIs?

Organizations’ shift to the cloud has introduced new security challenges for application security. If an attacker gains access to your APIs, they can easily bypass security measures and gain access to your cloud-based applications, which can result in data breaches, financial losses, and reputational damage.

API security is critical because APIs are often one of the weakest links in the security chain. Developers often prioritize speed, features, functionality, and ease of use over security, which can leave APIs vulnerable to attacks. Additionally, cloud-native APIs are often exposed directly to the internet, making them accessible to anyone. This can make it easier for hackers to exploit vulnerabilities in your APIs and gain access to your cloud-based applications.

Why has software supply chain security become such an issue these days? What can organizations do about it?

The software supply chain has become increasingly complex and dynamic with the rise of cloud computing, open-source software, and third-party software components and APIs. Widespread damage can occur for organizations if third-party APIs, cloud services, SDKs, and open-source software have security flaws. Software bills of materials (SBOMs) have emerged to address some of these issues. An SBOM is a standardized inventory of the software components used in a particular product or system, including their versions, dependencies, and sources.

However, SBOMs are only as good as the data they contain, and the quality of that data can vary depending on the source and the method of collection, particularly around the application software stack of APIs, cloud services, and SDKs. An SBOM inventory is constantly changing, and keeping its data current enough to act on requires continuous runtime analysis and dynamic inventory.

Coupled with SBOMs, organizations can benefit from a full-stack attack path analysis software supply chain solution that delivers continuous third-party application asset discovery and dynamic tracking of third-party vendors. Organizations can automatically categorize assets under known vendors, allow customers to add additional new vendors, curate individual assets under any vendor, and alert on increases in policy violations and high embed rates of third-party vendors within key applications.
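To make the staleness point concrete, here is a small hypothetical check, not Data Theorem’s Analyzer Engine, that diffs a declared SBOM against the components actually observed at runtime:

```python
# Hypothetical SBOM drift check; the data shapes are deliberately simplified.
declared = {            # from the vendor-supplied SBOM: name -> version
    "openssl": "3.0.8",
    "log4j-core": "2.20.0",
}
observed = {            # from continuous runtime analysis of the application
    "openssl": "3.0.8",
    "log4j-core": "2.14.1",   # older than declared
    "left-pad": "1.3.0",      # never declared at all
}

undeclared = set(observed) - set(declared)
version_drift = {name: (declared[name], observed[name])
                 for name in declared.keys() & observed.keys()
                 if declared[name] != observed[name]}

print("undeclared components:", undeclared)   # {'left-pad'}
print("version drift:", version_drift)        # log4j-core mismatch
```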

What is the difference between Shadow APIs and Zombie APIs, and how do they threaten organizations’ attack surface?

Shadow APIs are APIs that are used by developers or business units without the knowledge or approval of IT security teams. These APIs can be created by anyone with the technical knowledge to build them, and because they are not managed by the IT department, they are often not subject to the same security controls and governance policies as officially sanctioned APIs. Because they are not properly vetted, tested, and secured, they can pose a significant risk to the organization.

Zombie APIs are APIs that are no longer in use but are still active on the network and running in the cloud. These APIs can be left over from legacy systems, previous versions of the API, or retired applications; or they may have been created by developers who have since left the organization. Zombie APIs can be particularly dangerous because they may not be monitored or secured, making them vulnerable to exploitation. Attackers can use these APIs to gain unauthorized access to sensitive data, bypass security controls, and launch lateral movement attacks against other systems on the network.
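A minimal sketch of that triage (an illustration, not Data Theorem’s discovery engine) is a set comparison between the sanctioned API catalog, the deprecation list, and the endpoints that traffic analysis actually observes:

```python
# Illustrative shadow/zombie API triage from three inventories.
documented = {"/v2/users", "/v2/orders", "/v1/orders"}   # sanctioned catalog
deprecated = {"/v1/orders"}                              # marked end-of-life
observed   = {"/v2/users", "/v2/orders", "/v1/orders", "/internal/export"}

shadow  = observed - documented          # live but never sanctioned
zombies = deprecated & observed          # retired on paper, still answering

print("shadow APIs:", shadow)    # {'/internal/export'}
print("zombie APIs:", zombies)   # {'/v1/orders'}
```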

Describe briefly Data Theorem’s approach to securing cloud-native applications and APIs.

Data Theorem’s broad AppSec portfolio protects organizations from data breaches with application security testing and protection for modern web frameworks, API-driven microservices and cloud resources. Our solutions are powered by our award-winning Analyzer Engine which leverages a new type of dynamic and runtime analysis that is fully integrated into the CI/CD process, and enables organizations to conduct continuous, automated security inspection and remediation. Data Theorem is one of the first vendors to provide a full stack application security analyzer that connects attack surfaces of applications starting at the client layers found in mobile and web, the network layers found in APIs, and the infrastructure layers found in cloud services.

Data Theorem’s API Security product inventories and hacks all APIs so it can remediate security issues within the CI pipeline. Our Cloud Secure is a Cloud-Native Application Protection Platform (CNAPP) with attack surface management (ASM) and a complete AppSec suite all-in-one. Finally, the Mobile Secure platform helps teams find and resolve critical security vulnerabilities across their entire mobile application tech stack by performing continuous dynamic runtime analysis on each release.

How does machine learning (ML) and artificial intelligence (AI) come into play in your solutions to help protect apps and APIs?

AI and ML play a significant role in enhancing the capabilities of Data Theorem’s security products. By leveraging these technologies, Data Theorem can provide more advanced and effective security solutions.

For example, AI and ML algorithms can enhance and help harden code samples with security best practices across a variety of modern languages – such as Node.js, Python, Java, Go, Rust, Objective-C, Swift, and many more – helping customers apply shift-left security practices early in the CI/CD cycle. AI and ML algorithms can also analyze vast amounts of data, identify patterns, and detect anomalies or potential security threats. This enables Data Theorem’s products to detect and mitigate various types of attacks, such as malicious activity or suspicious behavior.

In addition, AI and ML techniques can automate the analysis of software vulnerabilities by examining code patterns, data flows and configurations. This helps Data Theorem’s products identify potential weaknesses and provide insights on how to address them. ML models can also learn from normal user behavior and establish baseline profiles. By continuously monitoring user activities and comparing them against these profiles, Data Theorem’s products can identify deviations that may indicate unauthorized access or compromised accounts.
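For instance, a per-user baseline can be as simple as tracking typical working hours and endpoints and flagging joint deviations. This generic sketch invents its features and rules; it is not Data Theorem’s model:

```python
# Generic behavioral-baseline sketch; features and thresholds are invented.
baseline = {
    "alice": {"usual_hours": range(8, 19), "usual_endpoints": {"/v2/orders"}},
}

def is_suspicious(user: str, hour: int, endpoint: str) -> bool:
    """Compare an event against the user's learned baseline profile."""
    profile = baseline.get(user)
    if profile is None:
        return True  # no baseline yet: treat as untrusted, not ignored
    off_hours = hour not in profile["usual_hours"]
    new_endpoint = endpoint not in profile["usual_endpoints"]
    return off_hours and new_endpoint  # both deviations at once -> flag

print(is_suspicious("alice", 3, "/internal/export"))  # True
print(is_suspicious("alice", 10, "/v2/orders"))       # False
```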

Can you share any success stories or examples of organizations that have greatly benefited from implementing Data Theorem’s solutions?

There are many customer case studies published on Data Theorem’s website, but two success stories worth examining closely relate to the benefits of building a strong API security program.

First, AppLovin has developed the world’s most accurate ad placement engine based on user behavior and predictive analytics using machine learning. AppLovin’s core technology is delivered through APIs and proprietary SDKs targeted at entertainment and gaming applications on mobile devices. With Data Theorem, AppLovin ensures that the first-party APIs across its software stack and the third-party APIs across its software supply chains are continuously inventoried, tested for security exploits, and hardened with security best practices.

Second, a Fortune 50 financial services organization has built an API security program along with a Cloud-Native Application Protection Platform (CNAPP) that has won awards for the advanced capabilities it has architected within its CI/CD workflow and hybrid cloud environment.

In both cases, these API security programs powered by Data Theorem have discovered and illuminated Shadow APIs, eliminated the wasteful costs of Zombie APIs, and, most importantly, prevented data breaches across their cloud applications running in production.

Are there any plans for future developments or enhancements to Data Theorem’s product offerings that you can share with us?

Data Theorem always has a plethora of innovations slated in its product roadmap. Without disclosing specific details, we are planning to make additional improvements in the way our Analyzer Engine does continuous discovery and inventory, security testing, and runtime observability and protection. These are the core pillars of our underlying engine that powers all five of our products today.

Doug Dooley

Chief Operating Officer at Data Theorem

Doug Dooley is the Chief Operating Officer of Data Theorem. He heads up product strategy, marketing, sales, and customer success teams. Before joining Data Theorem, Dooley worked in venture capital leading investments of cloud-centric security, machine-learning, and infrastructure startups for Venrock. While at Venrock, Dooley served on the boards of Evident.io (Palo Alto Networks), Niara (HPE), and VeloCloud (VMware). Prior to Venrock, Dooley spent almost two decades as an entrepreneur and technology executive at some of the most innovative and market dominant technology infrastructure companies – ranging from large corporations such as Cisco and Intel to security and virtualization startups such as Neoteris, NetScreen, and RingCube. Earlier in his career, he held various management, engineering, sales, and marketing roles at Juniper Networks, Inktomi, and Nortel Networks. Dooley earned a B.S. in Computer Engineering from Virginia Tech.
