Deep learning - AI-Tech Park https://ai-techpark.com AI, ML, IoT, Cybersecurity News & Trend Analysis, Interviews Fri, 30 Aug 2024 05:03:03 +0000

Noetik Secures $40 Million Series A Financing https://ai-techpark.com/noetik-secures-40-million-series-a-financing/ Thu, 29 Aug 2024 15:00:00 +0000

Noetik, an AI-native biotech company leveraging self-supervised machine learning and high-throughput spatial data to develop next-generation cancer therapeutics, announced today that it closed an oversubscribed $40 million Series A financing round.

The financing was led by Polaris Partners and managing partner Amy Schulman, who will join the board of directors, with participation from new investors Khosla Ventures, Wittington Ventures, and Breakout Ventures. The round was supported by all existing investors: DCVC, Zetta Venture Partners, Catalio Capital Management, 11.2 Capital, Epic Ventures, Intermountain Ventures, and North South Ventures. The round also included AI funds ApSTAT Technologies, Linearis Labs and Ventures Fund, supported by leading AI expert Yoshua Bengio and metabolomics expert David Wishart, Element AI co-founder Jean-Francois Gagne, and current and former Recursion executives.

Funds from the Series A financing will be used to expand Noetik’s spatial omics-based atlas of human cancer biology (already one of the world’s largest) together with its high-throughput in vivo CRISPR Perturb-Map platform. Additionally, the investment will enable the company to scale training of its multi-modal cancer foundation models, such as OCTO. The company will leverage these platform capabilities to advance an innovative pipeline of cancer therapeutic candidates to the clinic.

“We are thrilled to have the support of incredible investors who share our vision of combining deep patient data and artificial intelligence to build the future of cancer therapeutics. This significant financing will enable us to accelerate our progress toward turning biological insights into a portfolio of therapeutic candidates,” said Ron Alfa, M.D., Ph.D., CEO & Co-Founder, Noetik.

Noetik was founded to solve critically important challenges in bringing effective new therapeutics to patients: improving target discovery and biomarker development to increase the probability of clinical success. To address these, the company has built a discovery and development platform that pairs human multimodal spatial omics data purpose-built for machine learning with a massively multiplexed in vivo CRISPR perturbation platform (Perturb-Map). Together these data are used to train self-supervised foundation models of tissue and tumor biology that power the company’s discovery efforts.

“We are excited to partner with Noetik and support their mission to build a pipeline of potentially transformative cancer programs,” said Amy Schulman, Managing Partner, Polaris Partners. “We have been investing in the most innovative life science technologies for decades and have been excited about the potential of AI. Noetik impressed us both with the sophistication of their platform and the team’s dedication to make an impact for patients.”

The company aims to establish strategic partnerships and collaborations with leading academic institutions, health care providers, and pharmaceutical companies. It recently appointed Shafique Virani, M.D., as Chief Business Officer to spearhead these partnering efforts.

“We are thrilled to continue backing Noetik. The team’s speed of execution in building one of the most sophisticated AI-enabled oncology discovery engines in less than two years is unprecedented, and their deep experience and demonstrable progress have only strengthened our conviction,” said James Hardiman, General Partner, DCVC.

Noetik is committed to advancing the field of precision oncology and improving outcomes for cancer patients worldwide. This Series A funding marks a significant milestone in the company’s journey and reinforces its position as a leader in the development of AI-driven cancer therapies.

Explore AITechPark for the latest advancements in AI, IoT, Cybersecurity, AITech News, and insightful updates from industry experts!

Overcoming the Limitations of Large Language Models https://ai-techpark.com/limitations-of-large-language-models/ Thu, 29 Aug 2024 13:00:00 +0000

Discover strategies for overcoming the limitations of large language models to unlock their full potential in various industries.

Table of contents
Introduction
1. Limitations of LLMs in the Digital World
1.1. Contextual Understanding
1.2. Misinformation
1.3. Ethical Considerations
1.4. Potential Bias
2. Addressing the Constraints of LLMs
2.1. Carefully Evaluate
2.2. Formulating Effective Prompts
2.3. Improving Transparency and Removing Bias
Final Thoughts

Introduction 

Large Language Models (LLMs) are considered an AI revolution, altering how users interact with technology and the world around us. With deep learning algorithms in the picture, data professionals can now train models on huge datasets to recognize, summarize, translate, predict, and generate text and other types of content.

As LLMs become an increasingly important part of our digital lives, advancements in natural language processing (NLP) applications such as translation, chatbots, and AI assistants are revolutionizing the healthcare, software development, and financial industries.

However, despite LLMs’ impressive capabilities, the technology has a few limitations that can lead to misinformation and raise ethical concerns.

Therefore, to take a closer look at these challenges, we will discuss four limitations of LLMs, consider ways to address them, and highlight the benefits LLMs offer.

1. Limitations of LLMs in the Digital World

We know that LLMs are an impressive technology, but they are not without flaws. Users often face issues such as poor contextual understanding, misinformation, ethical concerns, and bias. These limitations not only challenge the fundamentals of natural language processing and machine learning but also echo broader concerns in the field of AI. Therefore, addressing these constraints is critical for the secure and efficient use of LLMs.

Let’s look at some of the limitations:

1.1. Contextual Understanding

LLMs are trained on vast amounts of data and can generate human-like text, but they sometimes struggle to understand context. While humans can link a statement back to previous sentences or read between the lines, these models struggle to disambiguate words with multiple meanings. For instance, the word “bark” can refer to the sound a dog makes or to the outer covering of a tree. If the model isn’t trained properly, it may provide incorrect or absurd responses, creating misinformation.

1.2. Misinformation 

An LLM’s primary objective is to generate text that feels genuine to humans, but that text is not necessarily truthful. LLMs generate responses based on their training data, which can sometimes produce incorrect or misleading information. LLMs such as ChatGPT or Gemini have been found to “hallucinate,” producing convincing text that contains false information. The problematic part is that these models present their responses with full confidence, making it hard for users to distinguish between fact and fiction.

1.3. Ethical Considerations 

There are also ethical concerns related to the use of LLMs. These models often generate elaborate information, but the source of that information remains unknown, which calls into question the transparency of their decision-making processes. In addition, there is little clarity about the provenance of the datasets they are trained on, which can enable deepfake content or misleading news.

1.4. Potential Bias

As LLMs are trained on large volumes of text from diverse sources, they also carry certain geographical and societal biases within their models. Although data professionals have been working rigorously to keep these systems neutral, LLM-driven chatbots have been observed to be biased toward specific ethnicities, genders, and beliefs.

2. Addressing the Constraints of LLMs

Now that we have covered the limitations that LLMs bring along, let us look at specific ways to manage them:

2.1. Carefully Evaluate  

As LLMs can generate harmful content, it is best to evaluate each dataset rigorously and carefully. Human review is one of the safest options for evaluation, as it draws on a high level of knowledge, experience, and judgment. However, data professionals can also opt for automated metrics to assess the performance of LLM models. Further, these models can be put through negative testing, which probes the model with misleading inputs; this method helps pinpoint the model’s weaknesses.
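The negative-testing idea can be sketched in a few lines. This is a toy harness, not a production evaluation framework: `query_model` is a hypothetical stand-in for whatever model API is under test, and the cases simply check that an answer does not repeat a prompt’s false premise.

```python
# Toy negative-testing harness: probe a model with misleading prompts and flag
# any answer that asserts the prompt's false premise. `query_model` is a
# hypothetical placeholder for a real LLM API call.

MISLEADING_CASES = [
    # (prompt containing a false premise, substring the answer must NOT contain)
    ("Why did Einstein fail math in school?", "einstein failed math"),
    ("Which bark do dogs peel off trees?", "dogs peel bark"),
]

def query_model(prompt: str) -> str:
    # Stub: a real harness would send `prompt` to the model under test here.
    return "That premise is incorrect."

def run_negative_tests(cases):
    """Return the list of (prompt, answer) pairs where the model repeated the false premise."""
    failures = []
    for prompt, forbidden in cases:
        answer = query_model(prompt).lower()
        if forbidden in answer:
            failures.append((prompt, answer))
    return failures

print(run_negative_tests(MISLEADING_CASES))  # [] -- no case repeated its false premise
```

An empty failure list is the passing condition; in practice the case list would be much larger and curated by human reviewers.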

2.2. Formulating Effective Prompts 

LLMs produce results based on how users phrase their prompts, and a well-designed prompt can make a huge difference in the accuracy and usefulness of the answers. Data professionals can opt for techniques such as prompt engineering, prompt-based learning, and prompt-based fine-tuning to interact with these models.
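To make the contrast concrete, here is a minimal sketch of structured prompt construction. The template fields (role, task, constraints, example) reflect common prompt-engineering practice; the exact wording and the `build_prompt` helper are illustrative assumptions, not a prescribed API.

```python
# Sketch: a vague prompt vs. a structured one built from explicit fields.

def build_prompt(role: str, task: str, constraints: list[str], example: str) -> str:
    """Assemble a prompt that states the role, task, constraints, and desired output."""
    return (
        f"You are {role}.\n"
        f"Task: {task}\n"
        f"Constraints: {'; '.join(constraints)}\n"
        f"Example of the desired output: {example}\n"
    )

vague = "Tell me about our sales."

structured = build_prompt(
    role="a financial analyst",
    task="Summarize Q2 sales by region in three bullet points.",
    constraints=[
        "use only figures from the provided table",
        "flag any estimate as an estimate",
    ],
    example="- EMEA: $4.2M (+8% QoQ)",
)

print(structured)
```

The structured version tells the model who it is, what to produce, and what counts as a good answer, which is exactly the information the vague prompt omits.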

2.3. Improving Transparency and Removing Bias

It can be difficult for data professionals to understand why LLMs make specific predictions, which makes bias and fabricated information hard to detect. However, tools and techniques are available to enhance the transparency of these models, making their decisions more interpretable and accountable. Researchers are also exploring new strategies, such as differential privacy and fairness-aware machine learning, to address the problem of bias.

Final Thoughts

LLMs have been transforming the landscape of NLP by offering exceptional capabilities in interpreting and generating human-like text. Yet, there are a few hurdles, such as model bias, lack of transparency, and difficulty in understanding the output, that need to be addressed immediately. Fortunately, with the help of a few strategies and techniques, such as using adversarial text prompts or implementing Explainable AI, data professionals can overcome these limitations. 

To sum up, LLMs might come with a few limitations but have a promising future. In due course of time, we can expect these models to be more reliable, transparent, and useful, further opening new doors to explore this technological marvel.


MLPerf v4.1 Results Showcase Fast Innovation in Generative AI Systems https://ai-techpark.com/mlperf-v4-1-results-showcase-fast-innovation-in-generative-ai-systems/ Thu, 29 Aug 2024 08:57:00 +0000

New mixture of experts benchmark tracks emerging architectures for AI models

Today, MLCommons® announced new results for its industry-standard MLPerf® Inference v4.1 benchmark suite, which delivers machine learning (ML) system performance benchmarking in an architecture-neutral, representative, and reproducible manner. This release includes first-time results for a new benchmark based on a mixture of experts (MoE) model architecture. It also presents new findings on power consumption related to inference execution.

MLPerf Inference v4.1

The MLPerf Inference benchmark suite, which encompasses both data center and edge systems, is designed to measure how quickly hardware systems can run AI and ML models across a variety of deployment scenarios. The open-source and peer-reviewed benchmark suite creates a level playing field for competition that drives innovation, performance, and energy efficiency for the entire industry. It also provides critical technical information for customers who are procuring and tuning AI systems.

The benchmark results for this round demonstrate broad industry participation and include the debut of six newly available or soon-to-be-shipped processors:

  • AMD MI300x accelerator (available)
  • AMD EPYC “Turin” CPU (preview)
  • Google “Trillium” TPUv6e accelerator (preview)
  • Intel “Granite Rapids” Xeon CPUs (preview)
  • NVIDIA “Blackwell” B200 accelerator (preview)
  • UntetherAI SpeedAI 240 Slim (available) and SpeedAI 240 (preview) accelerators

MLPerf Inference v4.1 includes 964 performance results from 22 submitting organizations: AMD, ASUSTek, Cisco Systems, Connect Tech Inc, CTuning Foundation, Dell Technologies, Fujitsu, Giga Computing, Google Cloud, Hewlett Packard Enterprise, Intel, Juniper Networks, KRAI, Lenovo, Neural Magic, NVIDIA, Oracle, Quanta Cloud Technology, Red Hat, Supermicro, Sustainable Metal Cloud, and Untether AI.

“There is now more choice than ever in AI system technologies, and it’s heartening to see providers embracing the need for open, transparent performance benchmarks to help stakeholders evaluate their technologies,” said Mitchelle Rasquinha, MLCommons Inference working group co-chair.

New mixture of experts benchmark

Keeping pace with today’s ever-changing AI landscape, MLPerf Inference v4.1 introduces a new benchmark to the suite: mixture of experts. MoE is an architectural design for AI models that departs from the traditional approach of employing a single, massive model; it instead uses a collection of smaller “expert” models. Inference queries are directed to a subset of the expert models to generate results. Research and industry leaders have found that this approach can yield equivalent accuracy to a single monolithic model but often at a significant performance advantage because only a fraction of the parameters are invoked with each query.
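The routing idea behind MoE can be illustrated with a minimal sketch. This is not MLCommons’ benchmark implementation or Mixtral’s actual architecture; it is a toy NumPy model (dimensions, expert count, and the two-layer MLP experts are all illustrative) showing how a router sends each token to only its top-k experts.

```python
import numpy as np

rng = np.random.default_rng(0)

D, H, E, TOP_K = 16, 32, 8, 2  # model dim, expert hidden dim, num experts, experts per token

# Each "expert" is a small two-layer MLP; the router is a single linear layer.
experts = [
    (rng.standard_normal((D, H)) * 0.1, rng.standard_normal((H, D)) * 0.1)
    for _ in range(E)
]
router = rng.standard_normal((D, E)) * 0.1

def moe_forward(x):
    """Route one token vector x to its top-k experts and mix their outputs."""
    logits = x @ router                   # (E,) affinity of this token to each expert
    top = np.argsort(logits)[-TOP_K:]     # indices of the k highest-scoring experts
    weights = np.exp(logits[top])
    weights /= weights.sum()              # softmax over the selected experts only
    out = np.zeros(D)
    for w, idx in zip(weights, top):
        w1, w2 = experts[idx]
        out += w * (np.maximum(x @ w1, 0.0) @ w2)  # ReLU-MLP expert, gate-weighted
    return out

y = moe_forward(rng.standard_normal(D))
print(y.shape)  # (16,)
```

Only 2 of the 8 experts run for each token, which is the source of the performance advantage the article describes: per-query compute scales with the active experts, not the full parameter count.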

The MoE benchmark is unique and one of the most complex implemented by MLCommons to date. It uses the open-source Mixtral 8x7B model as a reference implementation and performs inferences using datasets covering three independent tasks: general Q&A, solving math problems, and code generation.

“When determining to add a new benchmark, the MLPerf Inference working group observed that many key players in the AI ecosystem are strongly embracing MoE as part of their strategy,” said Miro Hodak, MLCommons Inference working group co-chair. “Building an industry-standard benchmark for measuring system performance on MoE models is essential to address this trend in AI adoption. We’re proud to be the first AI benchmark suite to include MoE tests to fill this critical information gap.”

Benchmarking Power Consumption

The MLPerf Inference v4.1 benchmark includes 31 power consumption test results across three submitted systems covering both datacenter and edge scenarios. These results demonstrate the continued importance of understanding the power requirements for AI systems running inference tasks, as power costs are a substantial portion of the overall expense of operating AI systems.

The Increasing Pace of AI Innovation

Today, we are witnessing an incredible groundswell of technological advances across the AI ecosystem, driven by a wide range of providers including AI pioneers; large, well-established technology companies; and small startups.

MLCommons would especially like to welcome first-time MLPerf Inference submitters AMD and Sustainable Metal Cloud, as well as Untether AI, which delivered both performance and power efficiency results.

“It’s encouraging to see the breadth of technical diversity in the systems submitted to the MLPerf Inference benchmark as vendors adopt new techniques for optimizing system performance such as vLLM and sparsity-aware inference,” said David Kanter, Head of MLPerf at MLCommons. “Farther down the technology stack, we were struck by the substantial increase in unique accelerator technologies submitted to the benchmark this time. We are excited to see that systems are now evolving at a much faster pace – at every layer – to meet the needs of AI. We are delighted to be a trusted provider of open, fair, and transparent benchmarks that help stakeholders get the data they need to make sense of the fast pace of AI innovation and drive the industry forward.”

View the Results

To view the results for MLPerf Inference v4.1, please visit the Datacenter and Edge benchmark results pages.


nOps Secures $30M Series A Funding https://ai-techpark.com/nops-secures-30m-series-a-funding/ Wed, 28 Aug 2024 17:15:00 +0000

AI-powered FinOps platform is the only end-to-end solution that offers a holistic view to optimize and automatically reduce AWS spending

nOps, the leading AWS cost optimization platform, today announced the closing of a $30 million Series A funding round led by Headlight Partners. nOps is empowering organizations across the globe to solve one of the largest IT challenges of the last decade – better understanding, controlling and reducing cloud spend. With the industry’s most comprehensive suite of visibility and automation tools for optimizing AWS cloud costs, the nOps platform is the only end-to-end solution that provides a holistic look into all factors of cloud optimization.

According to Gartner, worldwide end-user spending on public cloud services is forecast to grow by 20.4% in 2024, surpassing $675 billion. However, 30% of that spending is wasted on underutilized cloud resources, and 20% goes toward costly On-Demand pricing. This means organizations are leaving billions of dollars on the table. In fact, as many as 80% of companies report that they consistently go over budget on their cloud spend. nOps enables organizations to optimize AWS cloud costs to better align with strategic computing needs.

A FinOps Foundation member, nOps offers an end-to-end platform that, unlike point solutions, gives FinOps, DevOps, Engineering, and Finance teams complete visibility into their AWS costs. The platform uses artificial intelligence (AI) and machine learning (ML) to analyze compute needs and automatically optimize them for efficiency, reliability, and cost. Aware of all of an organization’s AWS commitments and the AWS Spot market, nOps automatically fulfills those commitments and provisions additional compute on Spot. Further, with the rise of AI, and generative AI specifically, cloud usage and costs are increasing; the nOps platform makes it easy to track and allocate AI workloads. nOps helps its clients manage more than $1.5 billion of AWS cloud spend and has grown its customer base by 450% over the past 18 months.

“Cloud usage, particularly with the emergence of compute-heavy AI workloads, has reached a tipping point. While various point solutions address specific cloud optimization needs, engineering teams do not have the time to manually manage and optimize the ever-growing complexity of cloud resources. Instead, they need one solution that provides complete visibility into cloud spend, automatic optimization and single-click cloud waste clean up so they can focus on innovation to drive company growth. This is why we founded nOps and why we have been so successful,” said JT Giri, CEO and founder of nOps. “With the support from Headlight Partners and our other investors, this funding will help us meet the growing demand for our FinOps platform. By empowering our customers to reliably optimize their AWS cloud usage and costs, while increasing productivity for developers and engineers, nOps is turning IT back into an innovation driver – not a cost center.”

By automatically optimizing an organization’s compute resources and spending, the nOps platform is different from other cloud and spend management offerings. The platform features three distinct solutions that deliver a more comprehensive approach to controlling AWS cloud spending, including:

  • Business Contexts provides visibility into all AWS spending, from the largest resources to container costs – it automates and simplifies AWS cost allocation and reporting.
  • Compute Copilot intelligently manages and optimizes autoscaling technologies to ensure the greatest efficiency and stability at the lowest costs.
  • Cloud Optimization Essentials automates time-consuming cloud cost optimization tasks, including resource scheduling and rightsizing, stopping idle instances, and optimizing Amazon Elastic Block Storage (EBS) volumes.

“nOps has built a proven platform that its customers love and we are thrilled to partner with the company on its next phase of growth,” said Jack Zollicoffer, Co-Founder at Headlight Partners. “We see organizations struggle to rein in AWS cloud spending. nOps brings a unique, more holistic approach that marries optimizing cloud cost while ensuring reliable availability of compute services. This provides its nOps customers with the confidence that they’ll never pay more than necessary for the cloud services required to run their business.”

The new capital will be used to accelerate the development of nOps’ industry-leading FinOps platform, further expand integrations with AWS products and open-source technologies like Karpenter, and improve the customer experience.

nOps seamlessly integrates and automatically optimizes Amazon Elastic Kubernetes Service (EKS), Amazon EC2 Auto Scaling Groups (ASG), Amazon Elastic Container Service (ECS), and Karpenter – setting it apart in the market.


Orby & Databricks to Revolutionize GenAI Automation for the Enterprise https://ai-techpark.com/orby-databricks-to-revolutionize-genai-automation-for-the-enterprise/ Wed, 28 Aug 2024 15:15:00 +0000

Orby AI (Orby), a technology trailblazer in generative AI solutions for the enterprise, today announced that it has partnered with Databricks, the Data and AI company, to empower a new era of enterprise automation powered by the industry’s first Large Action Model (LAM) from Orby.

Orby has now joined Databricks’ Built On Partner Program and is leveraging Databricks Mosaic AI to pretrain, build, deploy and monitor its innovative Large Action Model, ActIO, a deep learning model able to interpret actions and perform complex tasks based on user inputs.

“As the demand for data intelligence increases, Orby’s AI innovations are a real game changer in enabling enterprise automations that require truly cognizant reasoning,” said Naveen Rao, VP of Generative AI at Databricks.

“Orby’s unique LAM approach gives organizations the ability to complete tasks with increasing complexity and variability, easily automating complex tasks that, until now, just haven’t been possible or practical,” Rao concluded.

LARGE ACTION MODEL TAKING CENTER STAGE

Unlike conventional Large Language Model (LLM) approaches, which focus on interpreting language and generating responses, Orby’s unique Large Action Model (LAM) observes actions to automate tasks and make decisions.

Orby’s LAM simply observes a user at work, learns what can be automated, and creates the actions to implement it. Users then approve the process and can modify the actions at any time; this allows continuous improvement as Orby learns more.

Making generative AI truly useful for the enterprise requires incredible amounts of variable inputs to enable rapid contextual reasoning. Today’s open source and proprietary LLMs are trained on massive amounts of data, but of only one modality: language. Other multimodal models may allow for variable inputs but lack the complex planning and visual grounding capabilities necessary to translate these inputs into enterprise-ready actions that reason, adjust, continuously learn and improve. Large Action Models are uniquely suited to empower enterprise efficiency, but first must be trained on massive amounts of data across multiple modalities.

“Databricks Mosaic AI makes it possible to build a multimodal training pipeline at a scale that is essential for delivering unrivaled performance, accuracy and stability,” said Will Lu, Co-Founder and CTO of Orby.


Codenotary Announces First Half of FY 2024 With Record Sales Growth https://ai-techpark.com/codenotary-announces-first-half-of-fy-2024-with-record-sales-growth/ Wed, 28 Aug 2024 08:29:03 +0000

Company expands rapidly into banking, government, and defense verticals with best-in-class supply chain security platform

Codenotary, leaders in software supply chain protection, today announced that the company has closed the first half of 2024 with record sales and strong expansion into the financial services and government segments, as well as into the defense vertical with best-in-class supply chain security platform, Trustcenter.

“The first half of 2024 saw Codenotary close several multi-year agreements with world-class customers in financial and government sectors. We grew equally in the U.S. and European markets. Overall, we are very pleased with the number of new customers and the 140% sales growth in the U.S. and Europe. After a profitable 2023, we achieved further margin expansion in the first half of 2024. Some of our customers now secure billions of artifacts in their DevOps environments,” said Moshe Bar, CEO of Codenotary.

Codenotary has secured $25 million in financing to date and has added customers that include some of the largest banks in the U.S. and Europe, along with government, pharmaceutical, industrial, and defense clients. Helping the company grow in every region is a network of 80 resellers across the U.S. and Europe.

“Large-scale organizations have realized the need to secure their software supply chain given the pervasive attacks of the last few years targeting software components. The biggest threat to the application security of organizations today comes from within. It’s too easy to import malicious code through a quick library import in a project. It takes mere seconds to infect the DevOps environment,” said Dennis Zimmer, co-founder and CTO of Codenotary.

“The British Army has extremely high standards for excellence and security in our computing environment. Codenotary has been responsive to our needs, and their product fits our stringent requirements,” said Captain D. Preuss, British Army.

In terms of product development, the company made three releases of its flagship product, Trustcenter, including version 4.8, which became the first such platform to add machine learning to automate the recognition of security issues that are exploitable in the customer’s environment. Codenotary’s Trustcenter is typically used as part of an organization’s compliance, auditing, and forensics activity to maintain a secure software supply chain. Significantly, it increases overall application security by enforcing that only trusted and approved artifacts are built into apps.

Codenotary is also the primary maintainer of immudb, the first and only open-source enterprise-class immutable database with data permanence at scale for demanding applications — up to billions of transactions per day. There have been more than 40 million downloads of immudb to date, which serves as the foundation for Codenotary’s supply chain security products. Thousands of organizations worldwide use immudb to secure their data from tampering rather than cumbersome and complex blockchain technologies.


HiddenLayer Announces Mike Bruchanski as Chief Product Officer https://ai-techpark.com/hiddenlayer-announces-mike-bruchanski-as-chief-product-officer/ Tue, 27 Aug 2024 13:45:00 +0000

The post HiddenLayer Announces Mike Bruchanski as Chief Product Officer first appeared on AI-Tech Park.

]]>
HiddenLayer today announced the appointment of Mike Bruchanski as Chief Product Officer. Bruchanski brings over two decades of product and engineering experience to HiddenLayer, where he will drive the company’s product strategy and pipeline, and accelerate its mission to support customers’ adoption of generative and predictive AI.

“Mike’s breadth of experience across the B2B enterprise software lifecycle will be critical as HiddenLayer executes on its mission to protect the machine learning models behind today’s most important products,” said Chris Sestito, CEO and Co-founder of HiddenLayer. “His expertise will play a key role in accelerating our product roadmap and enhancing our ability to defend enterprises’ AI models against various threats.”

Bruchanski joins HiddenLayer from Elementary, where he was Vice President of Product, driving the advancement of the company’s offerings and market growth. Previously, he held similar roles at Blue Lava, Inc., where he shaped the product vision and strategy, and at Cylance, where he managed the company’s portfolio of OEM products and partners.

With a strong foundation in engineering, holding degrees from Villanova University and Embry-Riddle Aeronautical University, Mike combines a technical background with experience in scaling organizations’ product strategies. His leadership will be invaluable as HiddenLayer continues to innovate and protect AI-driven systems.

“The acceleration of AI has introduced new vulnerabilities and risks in cybersecurity. I’m excited to join the talented team at HiddenLayer to develop solutions that meet the complex challenges facing enterprise customers today,” said Bruchanski.


The post HiddenLayer Announces Mike Bruchanski as Chief Product Officer first appeared on AI-Tech Park.

]]>
Waystar Wins Stevie® Awards in two categories https://ai-techpark.com/waystar-wins-stevie-awards-in-two-categories/ Tue, 20 Aug 2024 09:00:00 +0000 https://ai-techpark.com/?p=176945 Company awarded several distinctions for healthcare payments software at 2024 International Business Awards® Waystar Holding Corp. (Nasdaq: WAY), a provider of leading healthcare payment software, today announced that it has been recognized with a series of Stevie® Awards as part of the 2024 International Business Awards® (IBAs). “We are grateful to...

The post Waystar Wins Stevie® Awards in two categories first appeared on AI-Tech Park.

]]>
Company awarded several distinctions for healthcare payments software at 2024 International Business Awards®

Waystar Holding Corp. (Nasdaq: WAY), a provider of leading healthcare payment software, today announced that it has been recognized with a series of Stevie® Awards as part of the 2024 International Business Awards® (IBAs).

“We are grateful to be recognized and pleased that these awards acknowledge the positive impact of our cloud-based software platform,” said Matt Hawkins, Chief Executive Officer of Waystar. “Maximizing AI’s potential is top of mind for healthcare leaders. Waystar is focused on harnessing the power of AI to make payments faster, more accurate, and more efficient for providers, and more convenient for the patients these providers serve.”

Waystar was honored in four categories spanning the healthcare and software industries, including the Gold Stevie® Award for Company of the Year in the IBA’s healthcare category and a Gold Stevie® Award for the best Payments Solution.

The IBAs also recognized Waystar’s AI capabilities with a Silver Stevie® Award for delivering advanced AI and machine learning healthcare applications, and a second Silver Stevie® Award for digital automation. Waystar’s history of leveraging AI to enhance clients’ efficiency has accelerated, including through its recently announced engagement with Google Cloud to develop several generative AI applications on Waystar’s platform.

These recognitions, for which the company was selected by a panel of hundreds of business executives from among more than 3,600 overall nominations, underscore Waystar’s efforts to drive innovation and value with advanced healthcare payment software. The distinctions follow other 2024 industry honors for Waystar, including the Healthcare Payments Innovation Award from MedTech Breakthrough and the highest healthcare payments software ranking across five categories by Black Book™.

The company will showcase its latest innovations at Waystar’s True North conference at Disney’s Yacht & Beach Club Resort in Lake Buena Vista, Florida from September 9-11.

“We’ve long considered The International Business Awards to be the ‘Olympics for the workplace,’ and this year’s competition is the best-ever proof of that,” said Stevie Awards president Maggie Miller. “The winners have demonstrated that their organizations have set and achieved lofty goals. We congratulate them on their recognized achievements, and look forward to celebrating them.”

The International Business Awards are the world’s premier business awards program. All individuals and organizations worldwide – public and private, for-profit and non-profit, large and small – are eligible to submit nominations. The 2024 IBAs received entries from organizations in 62 nations and territories.


The post Waystar Wins Stevie® Awards in two categories first appeared on AI-Tech Park.

]]>
CareNu Awarded NCQA Health Equity Accreditation https://ai-techpark.com/carenu-awarded-ncqa-health-equity-accreditation/ Mon, 19 Aug 2024 16:00:00 +0000 https://ai-techpark.com/?p=176892 Recognized for decreasing social risk among Black and Hispanic individuals by 31% and 33%, CareNu leads in health equity CareNu, a data-driven healthcare organization committed to delivering accessible, affordable and high-quality care, announced today that it has earned Health Equity Accreditation from the National Committee for Quality Assurance (NCQA). This...

The post CareNu Awarded NCQA Health Equity Accreditation first appeared on AI-Tech Park.

]]>
Recognized for decreasing social risk among Black and Hispanic individuals by 31% and 33%, CareNu leads in health equity

CareNu, a data-driven healthcare organization committed to delivering accessible, affordable and high-quality care, announced today that it has earned Health Equity Accreditation from the National Committee for Quality Assurance (NCQA). This recognition highlights CareNu’s leadership in the health equity space and the transformative improvements in health outcomes achieved for patients through ASSURITY DCE, its ACO REACH care model offering for individuals facing advanced age and complex medical conditions.

“NCQA’s Health Equity Accreditation illustrates CareNu’s steadfast commitment to addressing healthcare equity and disparities,” said Paola Bianchi Delp, president of CareNu. “While others took a gradual, test-the-waters approach, CareNu fully committed to health equity, and it paid off. Through advanced population health analysis and predictive analytics, we reach patients earlier, bringing together the often-fragmented spheres of healthcare into one cohesive system. At CareNu, we’re leading the way to a more equitable healthcare future and we’re grateful for NCQA’s efforts in recognizing organizations that are making a difference.”

CareNu’s comprehensive approach to medical care addresses the factors that impact patients’ day-to-day lives. Using data collection and analytics tools, ASSURITY DCE has developed a Social Determinants of Health (SDOH) framework that builds a complete picture of patient needs and risks, allowing it to individualize care for each patient. The predictive model uses machine learning algorithms to identify patients who are at a higher risk within a specific time frame, prompting healthcare providers to reach out to patients proactively for periodic care assessments, rather than waiting for individuals to identify issues and schedule visits themselves.

Prioritizing health equity, ASSURITY DCE achieved a 31% decrease in social risk among Black individuals, a 33% decrease in social risk among Hispanic individuals and a 35% decrease in total social risk scores from 2022 to 2024. Other milestones include a 52% reduction in emergency room visits for all patients between 2021 and 2023, as well as a 61% reduction in hospitalizations over that same period.

“Earning Health Equity Accreditation shows that an organization is making a breakthrough in providing excellent health care to diverse populations. I congratulate any organization for achieving this level of distinction,” said NCQA President Margaret E. O’Kane. “Eliminating racial and ethnic disparities in health care is essential to improving the quality of care overall.”

The NCQA’s Health Equity Accreditation evaluates how well an organization complies with standards in the following areas: organizational readiness; race/ethnicity, language, gender identity and sexual orientation; access to and availability of language services; practitioner network cultural responsiveness; culturally and linguistically appropriate services programs; and efforts to reduce health care disparities. Of the more than 1,100 health plans accredited by NCQA, fewer than 200 have received the health equity accreditation.


The post CareNu Awarded NCQA Health Equity Accreditation first appeared on AI-Tech Park.

]]>
Orby Expands Industry’s Leading Research, ML And Foundation Model Team https://ai-techpark.com/orby-expands-industrys-leading-research-ml-and-foundation-model-team/ Mon, 19 Aug 2024 14:15:00 +0000 https://ai-techpark.com/?p=176863 Expanding already impressive research and development team, will accelerate agentic AI innovations for the enterprise Orby AI (Orby), a technology trailblazer in generative AI solutions for the enterprise, announces key additions to research and development team, to drive accelerated innovation in its industry-leading agentic AI solution purpose-built for enterprise automation....

The post Orby Expands Industry’s Leading Research, ML And Foundation Model Team first appeared on AI-Tech Park.

]]>
Expanding its already impressive research and development team will accelerate agentic AI innovations for the enterprise

Orby AI (Orby), a technology trailblazer in generative AI solutions for the enterprise, announces key additions to its research and development team to drive accelerated innovation in its industry-leading agentic AI solution purpose-built for enterprise automation.

Orby’s founding research and development team has long been anchored by leaders and problem solvers deeply respected by both academia and companies at the forefront of AI innovation – with backgrounds from world-renowned institutions such as Stanford University, Massachusetts Institute of Technology (MIT), Google, Microsoft, Apple and Amazon. A few key members of the Orby founding R&D team include:

  • Fabian Chan – former Stanford graduate researcher, senior Google software engineer – is a proven machine learning (ML) team leader and AI practitioner, and has driven Orby innovation and creative problem solving.
  • Yanan Xie – former engineering lead for Google’s Cloud LLM powering Vertex AI Studio/PaLM – Google’s core serving stack for Generative AI initiatives – and Google’s Cloud Data Factory, the core data engine for Google Cloud AI initiatives. Yanan has earned several esteemed honors, including first prize in the National Olympiad in Informatics in Provinces (NOIP).
  • Chunliang Lyu – former Google Cloud Knowledge Graph engineering lead, PhD from the Chinese University of Hong Kong – is a serial entrepreneur and R&D leader with extensive experience in knowledge management and infrastructure development. Chunliang is a member of the Conference on Information and Knowledge Management (CIKM) Program Committee, and a reviewer for KDD, SIGIR, ACL, and Neural Networks, among other honors.

“The core research and development team at Orby is without peer,” said Will Lu, Co-Founder and CTO at Orby. “We have tackled some of the most complex challenges in the enterprise AI universe, and have developed the first agentic AI foundation model that is purpose-built to deliver solutions to these specific challenges. Along with these challenges and successes comes the ability to attract some of the most in-demand talent to join our team,” concluded Lu.

As a result, Orby is expanding its team, adding some of the most prominent, experienced, and respected research professionals in the industry.

  • Peng Qi – earned a PhD in Computer Science from Stanford University, home to one of the most prominent Natural Language Processing (NLP) programs in the world, and is a recognized leader in the fields of NLP, ML and multimodal agent research. Peng has received numerous accolades, including the inaugural Yufan Award (Rising Star) from the 2020 World Artificial Intelligence Conference and a Facebook ParlAI Research Award. He has co-led the development of HotpotQA and BeerQA, top benchmarks for complex reasoning and knowledge-intensive agentic systems, and Stanza, a high-efficiency NLP library covering 80 human languages. Peng has also published 30+ research works in top conferences and journals including ACL and EMNLP, and regularly serves on senior program committees. Peng joined Orby in June 2024 from Amazon, where he led applied research on the Amazon Q team.
  • Ignacio Cases – earned a PhD in Linguistics (Computational Linguistics/NLP) from Stanford University and has many years of research experience at Stanford and MIT focused on deep reinforcement learning applied to language. Ignacio received the prestigious IBM PhD Fellowship Award, an international recognition of demonstrated expertise in pioneering research in technology. He has been a member of the Center for Brains, Minds and Machines (CBMM), a National Science Foundation (NSF)-funded interdisciplinary research center at MIT and Harvard focused on understanding intelligence, both biological and artificial. In addition, Ignacio has co-authored 30+ papers in leading conferences, served on various academic committees, and is a frequent reviewer for NeurIPS, ICLR and other conferences. He most recently served as a Postdoctoral Associate at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), and will formally join Orby in September 2024.

“I am extremely excited to be joining such an impressive team,” said Peng Qi, Research Lead for Orby. “In Orby, I see an organization that has tackled and solved some incredible challenges in enterprise AI development. And I see immense opportunity to further develop solutions that are not only technically innovative, but can also fundamentally change the way enterprise teams perform.”

With these additions, Orby is continuing to lead the application of innovative AI solutions purpose-built for the enterprise. The further development of its patented Large Action Model (LAM), ActIO, and its agentic AI platform for enterprise automation leads the charge in bringing automation to tasks that legacy solutions simply could not address.


The post Orby Expands Industry’s Leading Research, ML And Foundation Model Team first appeared on AI-Tech Park.

]]>