Innovaccer Provider Copilot Available on Oracle Healthcare Marketplace

The Innovaccer copilot will be deployed on Oracle Cloud Infrastructure to automate clinical documentation, generate potential diagnoses, and identify quality and coding gaps at the point of care

Innovaccer Inc., a leading provider of healthcare AI solutions and a member of Oracle Partner Network (OPN), today announced that its Provider Copilot is available on the Oracle Healthcare Marketplace and can be deployed on Oracle Cloud Infrastructure (OCI). The Oracle Healthcare Marketplace is a centralized repository of healthcare applications offered by Oracle and Oracle partners.

The deployment empowers healthcare providers to transfer AI-generated clinical notes into the patient record and address quality and coding gaps to improve care delivery. The Innovaccer Provider Copilot acts as a point-of-care assistant that helps reduce manual administrative work for healthcare providers (a toy sketch of this workflow follows the list):

  • Transcribes, analyzes, and generates clinical notes of the conversations between healthcare providers and patients.
  • Suggests potential diagnoses to consider based on the clinical notes, as well as select health data available in the Innovaccer healthcare data platform.
  • Summarizes the patient record prior to the patient visit, ensuring quick review with appropriate clinical context.
  • Flags quality and documentation gaps for providers participating in value-based care programs, supporting better care delivery and more appropriate coding.
  • Overlays insights directly on the Cerner EHR with zero-click activation.
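
To make the workflow concrete, here is a toy sketch of the note-drafting and gap-flagging steps. It is purely illustrative: the function names and the keyword-based "model" are hypothetical stand-ins, not Innovaccer's actual API or logic.

```python
# Illustrative sketch only -- hypothetical names, not Innovaccer's API.

def draft_clinical_note(transcript: str) -> str:
    """Reduce a visit transcript to a one-line draft note (stand-in for an LLM)."""
    first_sentence = transcript.split(".")[0].strip()
    return f"Visit summary: {first_sentence}."

def flag_documentation_gaps(note: str, required=("medication", "follow-up")) -> list[str]:
    """Flag required documentation elements missing from the draft note."""
    return [term for term in required if term not in note.lower()]

transcript = "Patient reports mild chest pain on exertion. No medication changes discussed."
note = draft_clinical_note(transcript)
print(note)                                    # Visit summary: Patient reports ...
print("Gaps:", flag_documentation_gaps(note)) # ['medication', 'follow-up']
```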

The Innovaccer Provider Copilot improves the overall provider experience by reducing the administrative burden of clinical documentation. The solution enables healthcare providers to capture essential information during patient encounters, allowing them to prioritize their patients. Providers on Oracle Cloud can access the transformative power of AI at the point of care through the Oracle Healthcare Marketplace listing.

Oracle Healthcare Marketplace is a one-stop shop for Oracle customers seeking trusted healthcare applications. It offers unique clinical and business solutions that extend Oracle Health and Oracle Cloud Applications.

“Our Provider Copilot allows providers to spend more time with their patients by automating their clinical documentation workflows. Quality care takes time – time that shouldn’t be spent on administrative tasks. By reducing the burden of documentation and giving providers more quality time with their patients, we are helping providers rediscover the joy of care,” said Abhinav Shashank, cofounder and CEO of Innovaccer. “Innovaccer’s participation in the Oracle Healthcare Marketplace further extends our commitment to the Oracle community and enables customers to reap the benefits of the Provider Copilot in their native EHR workflow. We look forward to leveraging the power of the Oracle Cloud and Oracle Health technologies to help us achieve our business goals.”

Innovaccer’s AI-powered Provider Copilot is also available on athenahealth’s Marketplace, further underscoring the company’s dedication to improving healthcare delivery through advanced technology.

To learn more about the Innovaccer Provider Copilot and its capabilities, visit the Innovaccer product listing on the Oracle Healthcare Marketplace. For details and support on implementation, please reach out to the Innovaccer support team.

Noetik Secures $40 Million Series A Financing

Noetik, an AI-native biotech company leveraging self-supervised machine learning and high-throughput spatial data to develop next-generation cancer therapeutics, announced today that it closed an oversubscribed $40 million Series A financing round.

The financing was led by Polaris Partners and managing partner Amy Schulman, who will join the board of directors, with participation from new investors Khosla Ventures, Wittington Ventures, and Breakout Ventures. All existing investors participated: DCVC, Zetta Venture Partners, Catalio Capital Management, 11.2 Capital, Epic Ventures, Intermountain Ventures, and North South Ventures. The round also included AI funds ApSTAT Technologies, Linearis Labs and Ventures Fund, backed by leading AI expert Yoshua Bengio, metabolomics expert David Wishart, Element AI co-founder Jean-Francois Gagne, and current and former Recursion executives.

Funds from the Series A financing will be used to expand Noetik’s spatial omics-based atlas of human cancer biology (already one of the world’s largest) together with its high-throughput in vivo CRISPR Perturb-Map platform. Additionally, the investment will enable the company to scale training of its multimodal cancer foundation models, such as OCTO. The company will leverage these platform capabilities to advance an innovative pipeline of cancer therapeutic candidates to the clinic.

“We are thrilled to have the support of incredible investors who share our vision of combining deep patient data and artificial intelligence to build the future of cancer therapeutics. This significant financing will enable us to accelerate our progress toward turning biological insights into a portfolio of therapeutic candidates,” said Ron Alfa, M.D., Ph.D., CEO and Co-Founder of Noetik.

Noetik was founded to solve critically important challenges in bringing effective new therapeutics to patients: improving target discovery and biomarker development to increase the probability of clinical success. To address these, the company has built a discovery and development platform that pairs human multimodal spatial omics data purpose-built for machine learning with a massively multiplexed in vivo CRISPR perturbation platform (Perturb-Map). Together these data are used to train self-supervised foundation models of tissue and tumor biology that power the company’s discovery efforts.
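
As a rough illustration of the self-supervised idea (mask part of the data, train a model to reconstruct it from context), here is a minimal sketch using random vectors in place of spatial-omics features. Everything here is hypothetical; the article does not describe Noetik's model internals.

```python
# Minimal masked-reconstruction sketch (hypothetical stand-in for
# self-supervised pretraining on tissue "patches").
import numpy as np

rng = np.random.default_rng(0)
patches = rng.normal(size=(16, 8))   # 16 tissue patches, 8 features each
mask = rng.random(16) < 0.25         # hide ~25% of patches
mask[0] = True                       # ensure at least one patch is masked

# Trivial "model": predict each hidden patch as the mean of the visible ones.
prediction = patches[~mask].mean(axis=0)
mse = ((patches[mask] - prediction) ** 2).mean()
print(f"masked-patch reconstruction MSE: {mse:.3f}")
```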

“We are excited to partner with Noetik and support their mission to build a pipeline of potentially transformative cancer programs,” said Amy Schulman, Managing Partner, Polaris Partners. “We have been investing in the most innovative life science technologies for decades and have been excited about the potential of AI. Noetik impressed us both with the sophistication of their platform and the team’s dedication to making an impact for patients.”

The company aims to establish strategic partnerships and collaborations with leading academic institutions, health care providers, and pharmaceutical companies, and recently appointed Shafique Virani, M.D., as Chief Business Officer to spearhead these partnering efforts.

“We are thrilled to continue backing Noetik. The team’s speed of execution in building one of the most sophisticated AI-enabled oncology discovery engines in less than two years is unprecedented, and their deep experience and demonstrable progress have only strengthened our conviction,” said James Hardiman, General Partner, DCVC.

Noetik is committed to advancing the field of precision oncology and improving outcomes for cancer patients worldwide. This Series A funding marks a significant milestone in the company’s journey and reinforces its position as a leader in the development of AI-driven cancer therapies.

Ontrak Health and MosaicVoice Announce Partnership

Ontrak, Inc. (NASDAQ: OTRK), a leading AI-powered and technology-enabled behavioral healthcare company, announced it has partnered with MosaicVoice, a pioneer in AI-powered voice technology, to transform healthcare delivery and patient outcomes for Ontrak and its members. The strategic partnership aims to create a more connected, intelligent, and patient-centric healthcare ecosystem by integrating advanced voice and AI technologies.

“By partnering with MosaicVoice, we are combining the best in AI-driven engagement with our expertise in healthcare to deliver more effective and efficient care,” said Brianna Brennan, Chief Innovation Officer at Ontrak Health. “This collaboration promotes a scalable and elevated patient experience that is intended to improve health outcomes. Incorporating this technology into our ecosystem further enables consistent delivery of our evidence-based model built upon the Comprehensive Healthcare Integration (CHI) framework.”

MosaicVoice’s AI technology offers real-time, dynamic guidance and conversation analysis, helping care teams maintain meaningful and compliant patient interactions. The solution actively listens to conversations, ensures adherence to care delivery protocols, and guides care teams with prompts that enhance patient engagement. This technology can detect patient sentiment, surface care opportunities, and provide immediate feedback to support care providers.

The partnership will leverage MosaicVoice’s advanced features, including the following (a toy scoring sketch follows the list):

  • Real-Time, Dynamic AI Guidance: Ensures all patient interactions are compliant and on-message while allowing personalized rapport building.
  • Post-Call Quality Assurance Automation: Uses AI to score 100% of interactions, identify care drivers, and automate call summaries, allowing care teams to focus on critical patient needs.
  • Performance Insights and Reporting: Offers customizable reporting to track engagement metrics, identify trends, and optimize care delivery based on real-time data.
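
To make "post-call quality assurance automation" concrete, here is a toy checklist scorer. It is a sketch under assumed behavior, not MosaicVoice's actual product logic; a real system would score transcripts with ML models rather than keyword matching.

```python
# Toy post-call QA scorer (hypothetical; not MosaicVoice's API).
CHECKLIST = {
    "greeting": ["thanks for calling", "how can i help"],
    "identity_verification": ["date of birth", "verify your identity"],
    "next_steps": ["follow up", "schedule"],
}

def score_call(transcript: str) -> dict:
    """Return per-item pass/fail plus an overall 0-100 score."""
    text = transcript.lower()
    results = {item: any(phrase in text for phrase in phrases)
               for item, phrases in CHECKLIST.items()}
    results["score"] = round(100 * sum(results.values()) / len(CHECKLIST))
    return results

print(score_call("Thanks for calling! Please confirm your date of birth. We'll follow up Monday."))
# {'greeting': True, 'identity_verification': True, 'next_steps': True, 'score': 100}
```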

“Combining our AI-driven voice solutions with Ontrak Health’s comprehensive behavioral health program platform will set a new standard for patient engagement,” said Julian McCarty, CEO of MosaicVoice. “Together, we’re driving a more proactive, responsive, and efficient approach to healthcare.”

FOMO Drives AI Adoption in 60% of Businesses, but Raises Trust Issues

  • Fear of Missing Out (FOMO) a key driver for AI uptake – even as trust in AI is high
  • Trust in AI is highest in the US at 87%, while France lags at 77%
  • Purpose-built AI considered the most trustworthy type of AI at 90%
A new survey from intelligent automation company ABBYY finds that fear of missing out (FOMO) plays a big part in artificial intelligence (AI) investment, with 63% of global IT leaders reporting they are worried their company will be left behind if they don’t use it.

With fears of being left behind so prevalent, it is no surprise that IT decision makers from the US, UK, France, Germany, Singapore, and Australia reported that average investment in AI exceeded $879,000 in the last year, despite a third (33%) of business leaders having concerns about implementation costs. Almost all (96%) respondents in the ABBYY State of Intelligent Automation Report: AI Trust Barometer said they also plan to increase investment in AI in the next year, although Gartner predicts that by 2025, growth in 90% of enterprise deployments of GenAI will slow as costs exceed value.

Furthermore, over half (55%) of business leaders admitted that another key driver for use of AI was pressure from customers.

Surprisingly, the survey revealed another fear for IT leaders implementing AI: misuse by their own staff (35%). This came ahead of concerns about costs (33%), AI hallucinations and lack of expertise (both 32%), and even compliance risk (29%).

Overall, respondents reported an overwhelmingly high level of trust in AI tools (84%). The most trustworthy, according to decision makers, were small language models (SLMs) or purpose-built AI (90%). More than half (54%) said they were already using purpose-built AI tools, such as intelligent document processing (IDP).

Maxime Vermeir, Senior Director of AI Strategy at ABBYY, commented, “It’s no surprise to me that organizations have more trust in small language models due to the tendency of LLMs to hallucinate and provide inaccurate and possibly harmful outcomes. We’re seeing more business leaders moving to SLMs to better address their specific business needs, enabling more trustworthy results.”

When asked about trust and ethical use of AI, an overwhelming majority (91%) of respondents are confident their company is following all government regulations. Yet only 56% say they have their own trustworthy AI policies, while 43% are seeking guidance from a consultant or non-profit. Half (50%) said they would feel more confident knowing their company had a responsible AI policy, while having software tools that can detect and monitor AI compliance was also cited as a confidence booster (48%).

On a regional basis, levels of trust were highest among US respondents, with 87% saying they trust AI; Singapore came next at 86%, followed by the UK and Australia, both at 85%, then Germany at 83%. Lagging was France, with just 77% of respondents indicating they trust AI.

The ABBYY State of Intelligent Automation Report gauged the level of trust in and adoption of AI technologies across 1,200 IT decision makers in the UK, US, France, Germany, Australia, and Singapore. The study was carried out June 3-12, 2024. Download the full report for additional details at https://digital.abbyy.com/state-of-intelligent-automation-ai-trust-barometer-2024-report-download.

The results of the AI Trust Barometer survey and other topics about the impact of AI-powered automation will be discussed during Intelligent Automation Month; register today at https://www.abbyy.com/intelligent-automation-month/.

Adastra Awarded Major Recognition by AWS for Generative AI Excellence

Adastra Group (also known as Adastra Corporation), a global leader in cloud, data, and artificial intelligence (AI) solutions and services, is proud to announce its achievement of a generative AI AWS competency badge. This accomplishment highlights Adastra’s commitment to innovation and excellence through our relationship with AWS.

“Achieving the AWS generative AI competency badge is a landmark achievement for Adastra, underscoring our dedication to leveraging artificial intelligence to unlock business value for organizations. AI adoption is surging, with 72% of enterprises already integrating AI and 65% regularly employing generative AI (McKinsey, 2024). As an AWS generative AI competency partner, we are proud to help organizations identify and implement high-value GenAI use cases. The fact that 3 out of 4 projects we deploy progress to production-grade solutions, where they deliver tangible business impact, is a testament to the value we bring to our clients through our relationship with AWS.” – Ondřej Vaněk, Chief AI Officer at Adastra.

At Adastra, we specialize in assessing your organization’s current state and challenges to create tailored solutions and roadmaps for implementation. As a longstanding member of the AWS Partner Network (APN) with four separate AWS competencies and a position as an Advanced Services Partner, we excel in cloud computing, data analytics, and machine learning. Our skilled teams seamlessly integrate AWS technologies into your organization’s existing environment, empowering businesses to harness their full potential. Our commitment to transformative results establishes us as a trusted partner in the dynamic landscape of cloud computing.

Adastra’s thorough application process for the generative AI competency badge involved the development of a comprehensive generative AI strategy and governance methodology. This approach enables us to assess client readiness for generative AI, identify use cases, and evaluate the business value of generative AI projects. AWS recognized this meticulous assessment methodology and awarded our efforts with this competency badge.

With this achievement, Adastra now stands as an AWS generative AI competency partner, positioning the company as a leader in driving innovation within generative AI technologies. Adastra will continue to set standards in developing and launching transformative applications across various industries, leveraging AWS technologies to craft groundbreaking solutions, enhance productivity, deliver unique experiences, and accelerate innovation.

Our use of AWS services and technologies drives significant cost savings and streamlines application development processes for organizations across diverse industries. Prospective clients can anticipate AWS-certified expertise, extensive industry experience, and successful project outcomes as we continue to deliver value and foster business growth through cutting-edge AWS generative AI solutions.

Amazon Bedrock, a fully managed generative AI service, offers a spectrum of cutting-edge LLMs for easy development of GenAI applications with a focus on security, privacy, and responsible AI usage. It enables easy experimentation, customization, and task execution within enterprise systems and data sources, seamlessly integrating with familiar AWS services for deployment. Amazon Bedrock offers heightened security, complete data control, encryption features, and identity-based policies, among other benefits.
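
For readers who want to see the shape of a Bedrock call, here is a minimal sketch using boto3's Converse API. It assumes configured AWS credentials and granted model access; the model ID shown is one published Anthropic identifier and may differ by region.

```python
# Minimal Amazon Bedrock call via the Converse API (assumes credentials
# and model access are already set up).
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.converse(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # example model ID
    messages=[{"role": "user", "content": [{"text": "Summarize these meeting notes: ..."}]}],
    inferenceConfig={"maxTokens": 256, "temperature": 0.2},
)
print(response["output"]["message"]["content"][0]["text"])
```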

Other generative AI solutions like Amazon Q and Amazon SageMaker JumpStart offer additional capabilities for faster innovation and increased business value. Amazon Q serves as a generative AI–powered assistant for software development and can also be used as a fully managed chatbot. Meanwhile, SageMaker JumpStart enables building, training, and deploying machine learning (ML) models for a variety of use cases. Amazon QuickSight provides scalable business intelligence, enhancing productivity with generative AI capabilities like executive summaries and interactive data stories.

Adastra remains committed to upholding rigorous ethical standards and promoting responsible AI usage. As such, we actively adhere to an ethical generative AI policy and prioritize ethical use in all generative AI solutions and endeavors.

Our achievement of this competency badge underscores Adastra’s technical proficiency and commitment to meeting customer needs, and it grants Adastra AWS Specialization Partner status. As a trusted technical partner, we remain committed to excellence and customer success in navigating AI innovation, and to empowering clients with high-quality cloud solutions built on AWS services such as generative AI.

Palantir Named a Leader in AI/ML Platforms

Palantir Technologies Inc. (NYSE: PLTR), a leading provider of AI software platforms, today announced it had been recognized as a Leader in artificial intelligence and machine learning (AI/ML) software platforms by renowned research and advisory firm Forrester.

Palantir was among the select companies that Forrester invited to participate in “The Forrester Wave™: AI/ML Platforms, Q3 2024” report. Palantir was cited as a Leader in this research, with the highest ranking for Current Offering.

As stated in the report: “Palantir has one of the strongest offerings in the AI/ML space with a vision and roadmap to create a platform that brings together humans and machines in a joint decision-making model. Its approach is to use its data pipelining capabilities and differentiated ontology to support the basis of its AI platform (AIP) offering… Palantir is quietly becoming one of the largest players in this market, seeing a consistent sustained growth rate in the past half decade by making its platform more accessible to users, investing in customer success, and embracing the support of multirole AI teams.”

“Palantir AIP powers the most demanding use-cases across the public and private sector, and is uniquely designed to connect AI directly into frontline operations,” said Akshay Krishnaswamy, Palantir’s Chief Architect. “We believe that being named a Leader in this Forrester Wave evaluation validates our investments across model-agnostic Generative AI infrastructure, multimodal guardrails for human-AI teaming, the decision-centric Ontology — and the full range of other capabilities needed to take enterprises from AI prototype to production.”

Palantir AIP provides the end-to-end architecture for enabling real-time, AI-driven decision-making. Together with Palantir Foundry and Palantir Apollo, AIP enables the “AI Mesh” architecture that is setting the standard for enterprises seeking to deliver composable, interoperable, and scalable value through AI. From public health to battery production, organizations depend on Palantir to safely, securely, and effectively leverage AI in their enterprises — and drive operational results.

Cerebras Launches the World’s Fastest AI Inference

20X the performance at 1/5th the price of GPUs – available today

Developers can now leverage the power of wafer-scale compute for AI inference via a simple API

Today, Cerebras Systems, the pioneer in high performance AI compute, announced Cerebras Inference, the fastest AI inference solution in the world. Delivering 1,800 tokens per second for Llama 3.1 8B and 450 tokens per second for Llama 3.1 70B, Cerebras Inference is 20 times faster than NVIDIA GPU-based solutions in hyperscale clouds. Starting at just 10 cents per million tokens, Cerebras Inference is priced at a fraction of GPU solutions, providing 100x higher price-performance for AI workloads.

Unlike alternative approaches that compromise accuracy for performance, Cerebras offers the fastest performance while maintaining state-of-the-art accuracy by staying in the 16-bit domain for the entire inference run. Cerebras Inference is priced at a fraction of GPU-based competitors, with pay-as-you-go pricing of 10 cents per million tokens for Llama 3.1 8B and 60 cents per million tokens for Llama 3.1 70B.

“Cerebras has taken the lead in Artificial Analysis’ AI inference benchmarks. Cerebras is delivering speeds an order of magnitude faster than GPU-based solutions for Meta’s Llama 3.1 8B and 70B AI models. We are measuring speeds above 1,800 output tokens per second on Llama 3.1 8B, and above 446 output tokens per second on Llama 3.1 70B – a new record in these benchmarks,” said Micah Hill-Smith, Co-Founder and CEO of Artificial Analysis.

“Artificial Analysis has verified that Llama 3.1 8B and 70B on Cerebras Inference achieve quality evaluation results in line with native 16-bit precision per Meta’s official versions. With speeds that push the performance frontier and competitive pricing, Cerebras Inference is particularly compelling for developers of AI applications with real-time or high volume requirements,” Hill-Smith concluded.

Inference is the fastest growing segment of AI compute and constitutes approximately 40% of the total AI hardware market. The advent of high-speed AI inference, exceeding 1,000 tokens per second, is comparable to the introduction of broadband internet, unleashing vast new opportunities and heralding a new era for AI applications. Cerebras’ 16-bit accuracy and 20x faster inference calls empower developers to build next-generation AI applications that require complex, multi-step, real-time performance of tasks, such as AI agents.

“DeepLearning.AI has multiple agentic workflows that require prompting an LLM repeatedly to get a result. Cerebras has built an impressively fast inference capability which will be very helpful to such workloads,” said Dr. Andrew Ng, Founder of DeepLearning.AI.

AI leaders in large companies and startups alike agree that faster is better:

“Speed and scale change everything,” said Kim Branson, SVP of AI/ML at GlaxoSmithKline, an early Cerebras customer.

“LiveKit is excited to partner with Cerebras to help developers build the next generation of multimodal AI applications. Combining Cerebras’ best-in-class compute and SoTA models with LiveKit’s global edge network, developers can now create voice and video-based AI experiences with ultra-low latency and more human-like characteristics,” said Russell D’sa, CEO and Co-Founder of LiveKit.

“For traditional search engines, we know that lower latencies drive higher user engagement and that instant results have changed the way people interact with search and with the internet. At Perplexity, we believe ultra-fast inference speeds like what Cerebras is demonstrating can have a similar unlock for user interaction with the future of search – intelligent answer engines,” said Denis Yarats, CTO and co-founder, Perplexity.

“With infrastructure, speed is paramount. The performance of Cerebras Inference supercharges Meter Command to generate custom software and take action, all at the speed and ease of searching on the web. This level of responsiveness helps our customers get the information they need, exactly when they need it in order to keep their teams online and productive,” said Anil Varanasi, CEO of Meter.

Cerebras has made its inference service available across three competitively priced tiers: Free, Developer, and Enterprise. A quick cost sketch follows the list.

  • The Free Tier offers free API access and generous usage limits to anyone who logs in.
  • The Developer Tier, designed for flexible, serverless deployment, provides users with an API endpoint at a fraction of the cost of alternatives in the market, with Llama 3.1 8B and 70B models priced at 10 cents and 60 cents per million tokens, respectively. Looking ahead, Cerebras will be continuously rolling out support for many more models.
  • The Enterprise Tier offers fine-tuned models, custom service level agreements, and dedicated support. Ideal for sustained workloads, enterprises can access Cerebras Inference via a Cerebras-managed private cloud or on customer premises. Pricing for enterprises is available upon request.
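
As a back-of-the-envelope check on the pay-as-you-go figures quoted above, here is a sketch using only the prices and single-stream speeds stated in this announcement; it ignores batching and parallelism, and the token volume is hypothetical.

```python
# Rough cost/time math from the stated Cerebras pricing and throughput.
PRICE_USD_PER_M_TOKENS = {"llama-3.1-8b": 0.10, "llama-3.1-70b": 0.60}
TOKENS_PER_SECOND = {"llama-3.1-8b": 1800, "llama-3.1-70b": 450}

tokens = 5_000_000  # hypothetical monthly output volume
for model, price in PRICE_USD_PER_M_TOKENS.items():
    cost = tokens / 1_000_000 * price
    minutes = tokens / TOKENS_PER_SECOND[model] / 60
    print(f"{model}: ${cost:.2f} for {tokens:,} tokens (~{minutes:.0f} min single-stream)")
```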

Strategic Partnerships to Accelerate AI Development

Building AI applications requires a range of specialized tools at each stage: from open-source model giants, to frameworks like LangChain and LlamaIndex that enable rapid development, to tools like Docker that ensure consistent containerization and deployment of AI-powered applications, and MLOps tools like Weights & Biases that maintain operational efficiency. At the forefront of innovation, companies like Meter are revolutionizing AI-powered network management, while learning platforms like DeepLearning.AI are equipping the next generation of developers with critical skills. Cerebras is proud to collaborate with these industry leaders, including Docker, Nasdaq, LangChain, LlamaIndex, Weights & Biases, Weaviate, AgentOps, and Log10, to drive the future of AI forward.

Cerebras Inference is powered by the Cerebras CS-3 system and its industry-leading AI processor – the Wafer Scale Engine 3 (WSE-3). Unlike graphics processing units that force customers to make trade-offs between speed and capacity, the CS-3 delivers best-in-class per-user performance while delivering high throughput. The massive size of the WSE-3 enables many concurrent users to benefit from blistering speed. With 7,000x more memory bandwidth than the NVIDIA H100, the WSE-3 solves Generative AI’s fundamental technical challenge: memory bandwidth. Developers can easily access the Cerebras Inference API, which is fully compatible with the OpenAI Chat Completions API, making migration seamless with just a few lines of code. Try Cerebras Inference today: www.cerebras.ai.
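
Since the announcement says the API is Chat Completions-compatible, migration should amount to swapping the base URL and key in existing OpenAI SDK code. A minimal sketch follows; the base URL and model name below are assumptions to verify against Cerebras' documentation.

```python
# OpenAI-SDK client pointed at a Chat Completions-compatible endpoint.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.cerebras.ai/v1",  # assumed endpoint -- verify in docs
    api_key="YOUR_CEREBRAS_API_KEY",
)
reply = client.chat.completions.create(
    model="llama3.1-8b",  # assumed model name -- verify in docs
    messages=[{"role": "user", "content": "In one sentence, why does inference speed matter?"}],
)
print(reply.choices[0].message.content)
```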

Adastra: Setting New Standards in DevOps with AWS Competency Badge

Adastra Group (also known as Adastra Corporation), a global leader in cloud, data, and artificial intelligence (AI) solutions and services, is proud to announce its achievement of the Amazon Web Services (AWS) DevOps Competency status. This designation recognizes that Adastra provides proven technology and deep expertise to help customers implement continuous integration and continuous delivery practices or automate infrastructure provisioning and management with configuration management tools on AWS. This accomplishment underscores Adastra’s dedication to innovation and excellence by working with AWS.

Achieving the AWS DevOps Competency differentiates Adastra as an AWS Partner Network (APN) member that has demonstrated specialized technical proficiency and proven customer success. To receive the designation, APN Partners must possess deep AWS expertise and deliver solutions seamlessly on AWS.

“At Adastra, we leverage DevOps and containerization best practices to break down silos and foster collaboration between development and operations teams for organizations across a variety of sectors. Adastra’s ongoing dedication ensures increased collaboration and flexibility in application development, reflecting our commitment to pioneering solutions that align with the evolving needs of our clients. This AWS Competency underscores our capability to deliver robust, secure and innovative solutions, reinforcing our position as leaders in technology-driven transformations.” – Loan Ly, VP, AWS Partner Sales & Marketing

At Adastra, our expertise lies in identifying your organization’s key challenges and crafting tailored solutions along with strategic implementation plans. As a longstanding AWS Partner, we stand out in the realms of cloud computing, data analytics, and machine learning. Our experienced teams adeptly incorporate AWS technologies into your existing environment, enabling businesses to unlock their maximum capabilities. Our dedication to delivering transformative outcomes solidifies our position as a reliable partner in the ever-evolving world of cloud computing.

As an AWS DevOps Competency Partner, Adastra demonstrates proficiency in delivering DevOps solutions on AWS. We offer a suite of services and software products designed to streamline provisioning, manage infrastructure, deploy application code, automate software release processes, and incorporate security best practices into CI/CD pipelines.

AWS’ DevOps competency assessment is structured into five phases: Initial Assessment, Analysis, Recommendations, Documentation, and Risk Assessment. This framework helps create tailored plans for various industries, emphasizing the critical role of executive sponsorship for successful DevOps transformation. Our approach meets AWS DevOps Competency standards with CI/CD pipelines that enable end-to-end automation – encompassing everything from infrastructure provisioning to all phases of application development – enhancing software release agility.

Moreover, Adastra integrates security throughout the development lifecycle, fostering a robust DevSecOps culture. For monitoring and security, we utilize AWS services and enable multi-account activity tracking and data event redaction for enhanced security.

As an AWS Partner, Adastra leverages best practices to deliver exceptional benefits such as speed, rapid delivery, reliability, scalability, security, and enhanced collaboration. AWS DevOps services integrate cultural philosophies, practices, and tools to accelerate service and application delivery, driving innovation and bolstering competitive market positioning. By breaking down silos between development and operations teams, AWS DevOps services promote collaboration across the application lifecycle and integrate quality assurance and security, promoting a unified DevSecOps approach.

Moreover, automation tools enable rapid application evolution, empowering engineers to innovate autonomously. By integrating key practices such as continuous integration, continuous delivery, microservices with containerization, infrastructure as code, and proactive monitoring, organizations ensure faster, more frequent, and reliable updates for customers. Containerization, in particular, plays a crucial role in enhancing portability—a key factor underscored in our DevOps expertise. It enables applications to run consistently across diverse environments, promoting scalability and operational efficiency while maintaining deployment consistency.
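
As a small illustration of the infrastructure-as-code practice mentioned above, here is a minimal AWS CDK v2 stack in Python that provisions a versioned, encrypted bucket for CI/CD build artifacts. This is a generic sketch, not Adastra's internal tooling; it assumes the `aws-cdk-lib` and `constructs` packages are installed and AWS credentials are configured.

```python
# Minimal AWS CDK v2 app: one stack, one artifact bucket (illustrative).
from aws_cdk import App, RemovalPolicy, Stack
from aws_cdk import aws_s3 as s3
from constructs import Construct

class ArtifactStack(Stack):
    """Provision a versioned, encrypted S3 bucket for CI/CD build artifacts."""
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        s3.Bucket(
            self, "BuildArtifacts",
            versioned=True,
            encryption=s3.BucketEncryption.S3_MANAGED,
            removal_policy=RemovalPolicy.DESTROY,  # demo-friendly teardown
        )

app = App()
ArtifactStack(app, "ArtifactStack")
app.synth()  # `cdk deploy` turns the synthesized template into real infrastructure
```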

Our application of AWS services and technologies not only results in substantial cost reductions but also optimizes application development workflows. Potential clients can expect unmatched skill, broad industry knowledge, and successful project results as we persist in providing value and stimulating business growth through cutting-edge AWS DevOps services.

Our achievement of this AWS Competency status highlights Adastra’s technical expertise and dedication to fulfilling customer needs. Securing the DevOps badge exemplifies our commitment to excellence in cloud solutions, as Adastra is also an AWS Specialization Partner. As a trusted technical AWS Partner, we demonstrate persistent dedication to excellence and customer success in steering AI innovation. This accomplishment reaffirms our pledge to enable clients to deliver top-tier cloud solutions through strategic alliances and proficiency in AWS services like AWS DevOps.

For more information, please visit: Adastra | Data Analytics and IT Consultancy (adastracorp.com)

Untether AI’s speedAI Cards Lead MLPerf with Top Performance, Efficiency

Verified by MLPerf Inference 4.1 benchmarks, speedAI accelerator cards exhibit industry-leading performance, up to 6X the energy efficiency and 3X lower latency than other submitters

Untether AI®, the leader in energy-centric AI inference acceleration, today announced its world-leading verified submission to the MLPerf® Inference benchmarks, version 4.1. The speedAI®240 Preview accelerator card submission demonstrated the highest throughput of any single PCIe card in the Datacenter and Edge categories for the image classification benchmark, ResNet-50[1]. The speedAI240 Slim submission exhibited greater than 3X the energy efficiency of other accelerators in the Datacenter Closed Power category [2], 6X the energy efficiency in the Edge Closed category [3], and 3X lower latency in the Edge Closed category [4].

The MLPerf benchmarks are the only objective, peer-reviewed set of AI performance and power benchmarks in existence. MLPerf is supported by MLCommons®, a consortium that includes industry-leading AI chip developers such as Nvidia, Google, and Intel. Benchmark submissions are audited and peer reviewed by all the submitters for performance, accuracy, latency, and power consumption, ensuring fair and accurate reporting. The barrier to entry is so high that few startups have attempted to submit results. Untether AI not only submitted, but demonstrated that it has the highest-performance, most energy-efficient, and lowest-latency PCIe AI accelerator cards.

“AI’s potential is being hamstrung by technologies that force customers to choose between performance and power efficiency. Demonstrating leadership across both vectors in a stringent test environment like MLPerf speaks volumes about the unique value of Untether AI’s At-Memory compute architecture as well as the maturity of our hardware and software,” said Chris Walker, CEO of Untether AI.

“MLPerf submission requires operating hardware, shipments to customers, computational accuracy, and a mature software stack that can be peer reviewed. It also requires companies to declare how many accelerators are used in their submission. These factors are what makes these benchmarks so objective, but also creates a high bar that many companies can’t meet in performance, accuracy or transparency of their solution,” said Bob Beachler, VP of Product at Untether AI.

Untether AI submitted two different cards and multiple system configurations in the Datacenter Closed, Datacenter Power, Edge Closed, and Edge Power categories for the MLPerf v4.1 ResNet-50 benchmark. In the Datacenter Closed offline performance submission of ResNet-50, it had a verified result of 70,348 samples/s[1], the highest throughput of any PCIe card submitted. In the Datacenter Closed Power category, its 309,752 server queries/s at 986 watts was 3X more energy efficient than the closest competitor [5].

In the Edge submission, it had a verified result of 0.12 ms for single-stream latency, 0.17 ms for multi-stream latency, and 70,348 samples/s for offline throughput [4]. These latency values are the fastest ever recorded for an MLPerf ResNet-50 submission. In the Edge Closed Power category, Untether AI was the only company that submitted, so no direct comparison is available. However, normalizing to single cards and their published TDPs, the speedAI240 Slim card demonstrated a 6X improvement in energy efficiency [3].
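
The normalization described above boils down to dividing verified throughput by power draw. Here is a quick sketch using the figures quoted in this article; the competitor value is derived from the stated 3X claim, not an independent measurement.

```python
# Performance-per-watt normalization from the stated MLPerf figures.
untether_qps, untether_watts = 309_752, 986
untether_per_watt = untether_qps / untether_watts
print(f"Untether AI: {untether_per_watt:,.0f} queries/s per watt")

# The stated 3X claim implies the closest competitor lands near one third:
competitor_per_watt = untether_per_watt / 3
print(f"Implied closest competitor: {competitor_per_watt:,.0f} queries/s per watt")
```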

Untether AI enlisted the aid of Krai, a company that provides AI computer systems and has been trusted to submit MLPerf benchmarks by companies such as Dell, Lenovo, Qualcomm, Google, and HPE. Anton Lokhmotov, CEO of Krai, said, “We were impressed not only by the record-breaking performance and energy efficiency of the speedAI 240 accelerators, but also the readiness and robustness of the imAIgine SDK which facilitated creating the benchmark submissions.”

Untether AI is excited to have its At-Memory technology available in shipping hardware and verified by MLPerf. The speedAI240 accelerator cards’ world-leading performance and energy efficiency will transform AI inference, making it faster and more energy efficient for markets including datacenters, video surveillance, vision guided robotics, agricultural technology, machine inspection, and autonomous vehicles. To find out more about Untether AI acceleration solutions and its recent MLPerf benchmark scores, please visit https://www.untether.ai/mlperf-results/.

MLPerf v4.1 Results Showcase Fast Innovation in Generative AI Systems

New mixture of experts benchmark tracks emerging architectures for AI models

Today, MLCommons® announced new results for its industry-standard MLPerf® Inference v4.1 benchmark suite, which delivers machine learning (ML) system performance benchmarking in an architecture-neutral, representative, and reproducible manner. This release includes first-time results for a new benchmark based on a mixture of experts (MoE) model architecture. It also presents new findings on power consumption related to inference execution.

MLPerf Inference v4.1

The MLPerf Inference benchmark suite, which encompasses both data center and edge systems, is designed to measure how quickly hardware systems can run AI and ML models across a variety of deployment scenarios. The open-source and peer-reviewed benchmark suite creates a level playing field for competition that drives innovation, performance, and energy efficiency for the entire industry. It also provides critical technical information for customers who are procuring and tuning AI systems.

The benchmark results for this round demonstrate broad industry participation and include the debut of six newly available or soon-to-be-shipped processors:

  • AMD MI300x accelerator (available)
  • AMD EPYC “Turin” CPU (preview)
  • Google “Trillium” TPUv6e accelerator (preview)
  • Intel “Granite Rapids” Xeon CPUs (preview)
  • NVIDIA “Blackwell” B200 accelerator (preview)
  • UntetherAI SpeedAI 240 Slim (available) and SpeedAI 240 (preview) accelerators

MLPerf Inference v4.1 includes 964 performance results from 22 submitting organizations: AMD, ASUSTek, Cisco Systems, Connect Tech Inc, CTuning Foundation, Dell Technologies, Fujitsu, Giga Computing, Google Cloud, Hewlett Packard Enterprise, Intel, Juniper Networks, KRAI, Lenovo, Neural Magic, NVIDIA, Oracle, Quanta Cloud Technology, Red Hat, Supermicro, Sustainable Metal Cloud, and Untether AI.

“There is now more choice than ever in AI system technologies, and it’s heartening to see providers embracing the need for open, transparent performance benchmarks to help stakeholders evaluate their technologies,” said Mitchelle Rasquinha, MLCommons Inference working group co-chair.

New mixture of experts benchmark

Keeping pace with today’s ever-changing AI landscape, MLPerf Inference v4.1 introduces a new benchmark to the suite: mixture of experts. MoE is an architectural design for AI models that departs from the traditional approach of employing a single, massive model; it instead uses a collection of smaller “expert” models. Inference queries are directed to a subset of the expert models to generate results. Research and industry leaders have found that this approach can yield equivalent accuracy to a single monolithic model, but often at a significant performance advantage, because only a fraction of the parameters are invoked with each query.
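
The routing idea is easy to see in code. Below is a minimal sketch of a single MoE layer: a learned router scores all experts, only the top-k run, and their outputs are mixed by softmax weights (the Mixtral 8x7B reference model uses 8 experts with top-2 routing). The shapes and random weights are illustrative only.

```python
# Toy mixture-of-experts layer: route each query to the top-k experts.
import numpy as np

rng = np.random.default_rng(0)
n_experts, top_k, d = 8, 2, 16
router_w = rng.normal(size=(d, n_experts))               # router weights
experts = [rng.normal(size=(d, d)) for _ in range(n_experts)]

def moe_layer(x: np.ndarray) -> np.ndarray:
    logits = x @ router_w                                # score all experts
    top = np.argsort(logits)[-top_k:]                    # pick the top-k
    w = np.exp(logits[top]) / np.exp(logits[top]).sum()  # softmax over top-k
    # Only top_k of n_experts matrices are ever multiplied -- that is the
    # "fraction of the parameters" savings described above.
    return sum(wi * (x @ experts[i]) for wi, i in zip(w, top))

print(moe_layer(rng.normal(size=d)).shape)               # (16,)
```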

The MoE benchmark is unique and one of the most complex implemented by MLCommons to date. It uses the open-source Mixtral 8x7B model as a reference implementation and performs inferences using datasets covering three independent tasks: general Q&A, solving math problems, and code generation.

“When determining to add a new benchmark, the MLPerf Inference working group observed that many key players in the AI ecosystem are strongly embracing MoE as part of their strategy,” said Miro Hodak, MLCommons Inference working group co-chair. “Building an industry-standard benchmark for measuring system performance on MoE models is essential to address this trend in AI adoption. We’re proud to be the first AI benchmark suite to include MoE tests to fill this critical information gap.”

Benchmarking Power Consumption

The MLPerf Inference v4.1 benchmark includes 31 power consumption test results across three submitted systems covering both datacenter and edge scenarios. These results demonstrate the continued importance of understanding the power requirements for AI systems running inference tasks, as power costs are a substantial portion of the overall expense of operating AI systems.
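
To see why power results matter for operating cost, here is a rough annual energy-cost estimate for a single always-on inference system; the wattage and electricity price are assumptions for illustration, not MLPerf data.

```python
# Rough annual energy cost for one always-on inference system (assumed inputs).
watts = 1_000           # assumed steady draw of an accelerator system
usd_per_kwh = 0.12      # assumed electricity price
hours_per_year = 24 * 365

annual_cost = watts / 1_000 * hours_per_year * usd_per_kwh
print(f"~${annual_cost:,.0f} per system per year")  # ~$1,051
```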

The Increasing Pace of AI Innovation

Today, we are witnessing an incredible groundswell of technological advances across the AI ecosystem, driven by a wide range of providers including AI pioneers; large, well-established technology companies; and small startups.

MLCommons would especially like to welcome first-time MLPerf Inference submitters AMD and Sustainable Metal Cloud, as well as Untether AI, which delivered both performance and power efficiency results.

“It’s encouraging to see the breadth of technical diversity in the systems submitted to the MLPerf Inference benchmark as vendors adopt new techniques for optimizing system performance such as vLLM and sparsity-aware inference,” said David Kanter, Head of MLPerf at MLCommons. “Farther down the technology stack, we were struck by the substantial increase in unique accelerator technologies submitted to the benchmark this time. We are excited to see that systems are now evolving at a much faster pace – at every layer – to meet the needs of AI. We are delighted to be a trusted provider of open, fair, and transparent benchmarks that help stakeholders get the data they need to make sense of the fast pace of AI innovation and drive the industry forward.”

View the Results

To view the results for MLPerf Inference v4.1, please visit the Datacenter and Edge benchmark results pages.
