AI-Tech Park (https://ai-techpark.com) - AI, ML, IoT, Cybersecurity News & Trend Analysis, Interviews - Fri, 30 Aug 2024

Innovaccer Provider Copilot Available on Oracle Healthcare Marketplace
https://ai-techpark.com/innovaccer-provider-copilot-available-on-oracle-healthcare-marketplace/ (Thu, 29 Aug 2024)

The post Innovaccer Provider Copilot Available on Oracle Healthcare Marketplace first appeared on AI-Tech Park.

The Innovaccer copilot will be deployed on Oracle Cloud Infrastructure to automate clinical documentation, generate potential diagnoses, and identify quality and coding gaps at the point of care

Innovaccer Inc., a leading provider of healthcare AI solutions and a member of Oracle Partner Network (OPN), today announced that its Provider Copilot is available on the Oracle Healthcare Marketplace and can be deployed on Oracle Cloud Infrastructure (OCI). The Oracle Healthcare Marketplace is a centralized repository of healthcare applications offered by Oracle and Oracle partners.

The deployment empowers healthcare providers to transfer AI-generated clinical notes into the patient record and address quality and coding gaps to improve care delivery. The Innovaccer Provider Copilot acts as a point-of-care assistant that helps reduce manual administrative work for healthcare providers:

  • Transcribes, analyzes, and generates clinical notes of the conversations between healthcare providers and patients.
  • Suggests potential diagnoses to consider based on the clinical notes, as well as select health data available in the Innovaccer healthcare data platform.
  • Summarizes the patient record prior to the patient visit, ensuring quick review with appropriate clinical context.
  • Flags quality and documentation gaps for providers participating in value-based care programs, improving care delivery and supporting more appropriate coding.
  • Overlays insights directly on the Cerner EHR with zero-click activation.

The Innovaccer Provider Copilot improves the overall provider experience by reducing the administrative burden of clinical documentation. The solution enables healthcare providers to capture essential information during patient encounters, allowing them to prioritize their patients. Providers on the Oracle Cloud can access and benefit from the transformative power of AI at the point of care through its marketplace listing.

Oracle Healthcare Marketplace is a one-stop shop for Oracle customers seeking trusted healthcare applications. It offers unique clinical and business solutions that extend Oracle Health and Oracle Cloud Applications.

“Our Provider Copilot allows providers to spend more time with their patients by automating their clinical documentation workflows. Quality care takes time – time that shouldn’t be spent on administrative tasks. By reducing the burden of documentation and allowing providers to spend more quality time with their patients, we are helping them rediscover the joy of care,” said Abhinav Shashank, cofounder and CEO of Innovaccer. “Innovaccer’s participation in the Oracle Healthcare Marketplace further extends our commitment to the Oracle community and enables customers to reap the benefits of the Provider Copilot in their native EHR workflow. We look forward to leveraging the power of the Oracle Cloud and Oracle Health technologies to help us achieve our business goals.”

Innovaccer’s AI-powered Provider Copilot is also available on athenahealth’s Marketplace, further validating the company’s dedication to improving healthcare delivery through advanced technology.

To learn more about the Innovaccer Provider Copilot and its capabilities, visit the Innovaccer product listing on the Oracle Healthcare Marketplace. For details and support on implementation, please reach out to the Innovaccer support team.

Explore AITechPark for the latest advancements in AI, IoT, Cybersecurity, AITech News, and insightful updates from industry experts!

Ontrak Health and MosaicVoice announced partnership
https://ai-techpark.com/ontrak-health-and-mosaicvoice-announced-partnership/ (Thu, 29 Aug 2024)

The post Ontrak Health and MosaicVoice announced partnership first appeared on AI-Tech Park.

Ontrak, Inc. (NASDAQ: OTRK), a leading AI-powered and technology-enabled behavioral healthcare company, announced it has partnered with MosaicVoice, a pioneer in AI-powered voice technology, to transform healthcare delivery and patient outcomes for Ontrak and its members. The strategic partnership aims to create a more connected, intelligent, and patient-centric healthcare ecosystem by integrating advanced voice and AI technologies.

“By partnering with MosaicVoice, we are combining the best in AI-driven engagement with our expertise in healthcare to deliver more effective and efficient care,” said Brianna Brennan, Chief Innovation Officer at Ontrak Health. “This collaboration promotes a scalable and elevated patient experience that is intended to improve health outcomes. Incorporating this technology into our ecosystem further enables consistent delivery of our evidence-based model built upon the Comprehensive Healthcare Integration (CHI) framework.”

MosaicVoice’s AI technology offers real-time, dynamic guidance and conversation analysis, helping care teams maintain meaningful and compliant patient interactions. The solution actively listens to conversations, ensures adherence to care delivery protocols, and guides care teams with prompts that enhance patient engagement. This technology can detect patient sentiment, surface care opportunities, and provide immediate feedback to support care providers.

The partnership will leverage MosaicVoice’s advanced features, including:

  • Real-Time, Dynamic AI Guidance: Ensures all patient interactions are compliant and on-message while allowing personalized rapport building.
  • Post-Call Quality Assurance Automation: Uses AI to score 100% of interactions, identify care drivers, and automate call summaries, allowing care teams to focus on critical patient needs.
  • Performance Insights and Reporting: Offers customizable reporting to track engagement metrics, identify trends, and optimize care delivery based on real-time data.
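
As a purely illustrative sketch of what the “score 100% of interactions” idea can look like, a transcript can be checked against a care-protocol checklist. MosaicVoice’s actual models are proprietary; the checklist, phrases, and scoring rule below are hypothetical.

```python
# Illustrative sketch only: a naive post-call QA scorer. The topic checklist
# and keyword lists are hypothetical, not MosaicVoice's actual method.

REQUIRED_TOPICS = {  # hypothetical care-protocol checklist
    "greeting": ["hello", "hi", "good morning"],
    "identity_check": ["verify", "date of birth"],
    "next_steps": ["follow up", "appointment", "schedule"],
}

def score_call(transcript: str) -> dict:
    """Return per-topic coverage and an overall 0-100 score."""
    text = transcript.lower()
    covered = {
        topic: any(phrase in text for phrase in phrases)
        for topic, phrases in REQUIRED_TOPICS.items()
    }
    overall = round(100 * sum(covered.values()) / len(covered))
    return {"covered": covered, "score": overall}

result = score_call(
    "Hi, before we start let me verify your date of birth. "
    "We'll schedule a follow up next week."
)
print(result["score"])  # all three topics present -> 100
```

A production system would replace keyword matching with speech recognition and language models, but the output shape – per-protocol coverage plus an overall score – is the kind of artifact a QA automation layer feeds into reporting.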

“Combining our AI-driven voice solutions with Ontrak Health’s comprehensive behavioral health program platform will set a new standard for patient engagement,” said Julian McCarty, CEO of MosaicVoice. “Together, we’re driving a more proactive, responsive, and efficient approach to healthcare.”

FOMO Drives AI Adoption in 60% of Businesses, but Raises Trust Issues
https://ai-techpark.com/fomo-drives-ai-adoption-in-60-of-businesses-but-raises-trust-issues/ (Thu, 29 Aug 2024)

The post FOMO Drives AI Adoption in 60% of Businesses, but Raises Trust Issues first appeared on AI-Tech Park.

  • Fear of Missing Out (FOMO) a key driver for AI uptake – even as trust in AI is high
  • Trust in AI is highest in the US at 87%, while France lags at 77%
  • Purpose-built AI considered the most trustworthy type of AI at 90%

A new survey from intelligent automation company ABBYY finds that fear of missing out (FOMO) is a major factor in artificial intelligence (AI) investment, with 63% of global IT leaders reporting they are worried their company will be left behind if they don’t use it.

With fears of being left behind so prevalent, it is no surprise that IT decision makers from the US, UK, France, Germany, Singapore, and Australia reported that average investment in AI exceeded $879,000 in the last year despite a third (33%) of business leaders having concerns about implementation costs. Almost all (96%) respondents in the ABBYY State of Intelligent Automation Report: AI Trust Barometer said they also plan to increase investment in AI in the next year, although Gartner predicts that by 2025, growth in 90% of enterprise deployments of GenAI will slow as costs exceed value.

Furthermore, over half (55%) of business leaders admitted that another key driver for use of AI was pressure from customers.

Surprisingly, the survey revealed another fear for IT leaders implementing AI was misuse by their own staff (35%). This came ahead of concerns about costs (33%), AI hallucinations and lack of expertise (both 32%), and even compliance risk (29%).

Overall, respondents reported an overwhelmingly high level of trust in AI tools (84%). The most trustworthy according to decision makers were small language models (SLMs) or purpose-built AI (90%). More than half (54%) said they were already using purpose-built AI tools, such as intelligent document processing (IDP).

Maxime Vermeir, Senior Director of AI Strategy at ABBYY, commented, “It’s no surprise to me that organizations have more trust in small language models due to the tendency of LLMs to hallucinate and provide inaccurate and possibly harmful outcomes. We’re seeing more business leaders moving to SLMs to better address their specific business needs, enabling more trustworthy results.”

When asked about trust and ethical use of AI, an overwhelming majority (91%) of respondents are confident their company is following all government regulations. Yet only 56% say they have their own trustworthy AI policies, while 43% are seeking guidance from a consultant or non-profit. Half (50%) said they would feel more confident knowing their company had a responsible AI policy, while having software tools that can detect and monitor AI compliance was also cited as a confidence booster (48%).

On a regional basis, levels of trust were highest among US respondents, with 87% saying they trust AI; Singapore came next at 86% followed by the UK and Australia, both 85%, then Germany at 83%. Lagging was France, with just 77% of respondents indicating they trust AI.

The ABBYY State of Intelligent Automation Report gauged the level of trust and adoption of AI technologies across 1,200 IT decision makers in the UK, US, France, Germany, Australia and Singapore. The study was carried out June 3-12, 2024. Download the full report for additional details at https://digital.abbyy.com/state-of-intelligent-automation-ai-trust-barometer-2024-report-download.

The results of the AI Trust Barometer survey and other topics about the impact of AI-powered automation will be discussed during Intelligent Automation Month; register today at https://www.abbyy.com/intelligent-automation-month/.

Adastra Awarded Major Recognition by AWS for Generative AI Excellence
https://ai-techpark.com/adastra-awarded-major-recognition-by-aws-for-generative-ai-excellence/ (Thu, 29 Aug 2024)

The post Adastra Awarded Major Recognition by AWS for Generative AI Excellence first appeared on AI-Tech Park.

Adastra Group (also known as Adastra Corporation), a global leader in cloud, data, and artificial intelligence (AI) solutions and services, is proud to announce its achievement of a generative AI AWS competency badge. This accomplishment highlights Adastra’s commitment to innovation and excellence through our relationship with AWS.

“Achieving the AWS generative AI competency badge is a landmark achievement for Adastra, underscoring our dedication to leveraging artificial intelligence to unlock business value for organizations. AI adoption is surging, with 72% of enterprises already integrating AI and 65% regularly employing generative AI (McKinsey, 2024). As an AWS generative AI competency partner, we are proud to help organizations identify and implement high-value GenAI use cases. The fact that 3 out of 4 projects we deploy progress to production-grade solutions, where they deliver tangible business impact, is a testament to the value we bring to our clients through our relationship with AWS.” – Ondřej Vaněk, Chief AI Officer at Adastra.

At Adastra, we specialize in assessing your organization’s current state and challenges to create tailored solutions and roadmaps for implementation. As a longstanding member of the AWS Partner Network (APN) with four separate AWS competencies and a position as an Advanced Services Partner, we excel in cloud computing, data analytics, and machine learning. Our skilled teams seamlessly integrate AWS technologies into your organization’s existing environment, empowering businesses to harness their full potential. Our commitment to transformative results establishes us as a trusted partner in the dynamic landscape of cloud computing.

Adastra’s thorough application process for the generative AI competency badge involved the development of a comprehensive generative AI strategy and governance methodology. This approach enables us to assess client readiness for generative AI, identify use cases, and evaluate the business value of generative AI projects. AWS recognized this meticulous assessment methodology and rewarded our efforts with this competency badge.

With this achievement, Adastra now proudly stands as an AWS generative AI competency partner. This positions Adastra as a leader in driving innovation within generative AI technologies. With this status, Adastra will continue to set standards in developing and launching transformative applications across various industries, leveraging innovative AWS technologies to craft groundbreaking solutions, enhance productivity, deliver unique experiences, and accelerate innovation.

Our use of AWS services and technologies drives significant cost savings and streamlines application development processes for organizations across diverse industries. Prospective clients can anticipate AWS-certified expertise, extensive industry experience, and successful project outcomes as we continue to deliver value and foster business growth through cutting-edge AWS generative AI solutions.

Amazon Bedrock, a revolutionary generative AI solution, offers a spectrum of cutting-edge LLMs for easy development of GenAI applications with a focus on security, privacy, and responsible AI usage. It enables easy experimentation, customization, and task execution within enterprise systems and data sources, seamlessly integrating with familiar AWS services for deployment. Amazon Bedrock offers heightened security, complete data control, encryption features, and identity-based policies, among other benefits.

Other generative AI solutions like Amazon Q and Amazon SageMaker Jumpstart offer additional capabilities for faster innovation and increased business value. Amazon Q serves as a generative AI–powered assistant for software development and can also be used as a fully managed chatbot. Meanwhile, SageMaker Jumpstart enables building, training, and deploying machine learning (ML) models for a variety of use cases. Amazon QuickSight provides scalable business intelligence, enhancing productivity with generative AI capabilities like executive summaries and interactive data stories.

Adastra remains committed to upholding rigorous ethical standards and promoting responsible AI usage. As such, we actively adhere to an ethical generative AI policy and prioritize ethical use in all generative AI solutions and endeavors.

Our achievement of this competency badge on the AWS platform underscores Adastra’s technical proficiency and commitment to meeting customer needs. Achieving the generative AI badge highlights our dedication to cloud solution excellence, granting Adastra AWS Specialization Partner status. As a trusted technical partner, we demonstrate unwavering commitment to excellence and customer success in navigating AI innovation. This achievement reaffirms our commitment to empowering clients and delivering high-quality cloud solutions through expertise in AWS services like generative AI.

Palantir Named a Leader in AI/ML Platforms
https://ai-techpark.com/palantir-named-a-leader-in-ai-ml-platforms/ (Thu, 29 Aug 2024)

The post Palantir Named a Leader in AI/ML Platforms first appeared on AI-Tech Park.

Palantir Technologies Inc. (NYSE: PLTR), a leading provider of AI software platforms, today announced it had been recognized as a Leader in artificial intelligence and machine learning (AI/ML) software platforms by renowned research and advisory firm Forrester.

Palantir was among the select companies that Forrester invited to participate in “The Forrester Wave™: AI/ML Platforms, Q3 2024” report. Palantir was cited as a Leader in this research, with the highest ranking for Current Offering.

As stated in the report: “Palantir has one of the strongest offerings in the AI/ML space with a vision and roadmap to create a platform that brings together humans and machines in a joint decision-making model. Its approach is to use its data pipelining capabilities and differentiated ontology to support the basis of its AI platform (AIP) offering… Palantir is quietly becoming one of the largest players in this market, seeing a consistent sustained growth rate in the past half decade by making its platform more accessible to users, investing in customer success, and embracing the support of multirole AI teams.”

“Palantir AIP powers the most demanding use-cases across the public and private sector, and is uniquely designed to connect AI directly into frontline operations,” said Akshay Krishnaswamy, Palantir’s Chief Architect. “We believe that being named a Leader in this Forrester Wave evaluation validates our investments across model-agnostic Generative AI infrastructure, multimodal guardrails for human-AI teaming, the decision-centric Ontology — and the full range of other capabilities needed to take enterprises from AI prototype to production.”

Palantir AIP provides the end-to-end architecture for enabling real-time, AI-driven decision-making. Together with Palantir Foundry and Palantir Apollo, AIP enables the “AI Mesh” architecture that is setting the standard for enterprises seeking to deliver composable, interoperable, and scalable value through AI. From public health to battery production, organizations depend on Palantir to safely, securely, and effectively leverage AI in their enterprises — and drive operational results.

Cerebras Launches the World’s Fastest AI Inference
https://ai-techpark.com/cerebras-launches-the-worlds-fastest-ai-inference/ (Thu, 29 Aug 2024)

The post Cerebras Launches the World’s Fastest AI Inference first appeared on AI-Tech Park.

20X performance and 1/5th the price of GPUs – available today

Developers can now leverage the power of wafer-scale compute for AI inference via a simple API

Today, Cerebras Systems, the pioneer in high performance AI compute, announced Cerebras Inference, the fastest AI inference solution in the world. Delivering 1,800 tokens per second for Llama 3.1 8B and 450 tokens per second for Llama 3.1 70B, Cerebras Inference is 20 times faster than NVIDIA GPU-based solutions in hyperscale clouds. Starting at just 10 cents per million tokens, Cerebras Inference is priced at a fraction of GPU solutions, providing 100x higher price-performance for AI workloads.

Unlike alternative approaches that compromise accuracy for performance, Cerebras offers the fastest performance while maintaining state-of-the-art accuracy by staying in the 16-bit domain for the entire inference run. Cerebras Inference is priced at a fraction of GPU-based competitors, with pay-as-you-go pricing of 10 cents per million tokens for Llama 3.1 8B and 60 cents per million tokens for Llama 3.1 70B.
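
The quoted pricing and throughput figures are easy to sanity-check with back-of-the-envelope math. A minimal sketch, where the dictionary keys and the example workload size are illustrative assumptions rather than official identifiers:

```python
# Back-of-the-envelope math using the per-million-token prices and
# tokens-per-second figures quoted above. Keys and workload sizes are
# illustrative, not official model identifiers.
PRICE_USD_PER_M_TOKENS = {"llama3.1-8b": 0.10, "llama3.1-70b": 0.60}
OUTPUT_TOKENS_PER_SEC = {"llama3.1-8b": 1800, "llama3.1-70b": 450}

def cost_usd(model: str, tokens: int) -> float:
    """Cost of generating `tokens` tokens at the quoted per-million rate."""
    return round(tokens / 1_000_000 * PRICE_USD_PER_M_TOKENS[model], 6)

def seconds_to_generate(model: str, tokens: int) -> float:
    """Wall-clock time to stream `tokens` output tokens at the quoted rate."""
    return tokens / OUTPUT_TOKENS_PER_SEC[model]

# A 5-million-token batch on Llama 3.1 8B costs 50 cents...
print(cost_usd("llama3.1-8b", 5_000_000))        # 0.5
# ...and an 1,800-token answer streams in about one second.
print(seconds_to_generate("llama3.1-8b", 1800))  # 1.0
```

At these rates, even token-heavy agentic workloads that re-prompt a model many times per task stay in the cents-per-session range, which is the practical meaning of the "100x higher price-performance" claim.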

“Cerebras has taken the lead in Artificial Analysis’ AI inference benchmarks. Cerebras is delivering speeds an order of magnitude faster than GPU-based solutions for Meta’s Llama 3.1 8B and 70B AI models. We are measuring speeds above 1,800 output tokens per second on Llama 3.1 8B, and above 446 output tokens per second on Llama 3.1 70B – a new record in these benchmarks,” said Micah Hill-Smith, Co-Founder and CEO of Artificial Analysis.

“Artificial Analysis has verified that Llama 3.1 8B and 70B on Cerebras Inference achieve quality evaluation results in line with native 16-bit precision per Meta’s official versions. With speeds that push the performance frontier and competitive pricing, Cerebras Inference is particularly compelling for developers of AI applications with real-time or high volume requirements,” Hill-Smith concluded.

Inference is the fastest growing segment of AI compute and constitutes approximately 40% of the total AI hardware market. The advent of high-speed AI inference, exceeding 1,000 tokens per second, is comparable to the introduction of broadband internet, unleashing vast new opportunities and heralding a new era for AI applications. Cerebras’ 16-bit accuracy and 20x faster inference calls empower developers to build next-generation AI applications that require complex, multi-step, real-time performance of tasks, such as AI agents.

“DeepLearning.AI has multiple agentic workflows that require prompting an LLM repeatedly to get a result. Cerebras has built an impressively fast inference capability which will be very helpful to such workloads,” said Dr. Andrew Ng, Founder of DeepLearning.AI.

AI leaders in large companies and startups alike agree that faster is better:

“Speed and scale change everything,” said Kim Branson, SVP of AI/ML at GlaxoSmithKline, an early Cerebras customer.

“LiveKit is excited to partner with Cerebras to help developers build the next generation of multimodal AI applications. Combining Cerebras’ best-in-class compute and SoTA models with LiveKit’s global edge network, developers can now create voice and video-based AI experiences with ultra-low latency and more human-like characteristics,” said Russell D’sa, CEO and Co-Founder of LiveKit.

“For traditional search engines, we know that lower latencies drive higher user engagement and that instant results have changed the way people interact with search and with the internet. At Perplexity, we believe ultra-fast inference speeds like what Cerebras is demonstrating can have a similar unlock for user interaction with the future of search – intelligent answer engines,” said Denis Yarats, CTO and co-founder, Perplexity.

“With infrastructure, speed is paramount. The performance of Cerebras Inference supercharges Meter Command to generate custom software and take action, all at the speed and ease of searching on the web. This level of responsiveness helps our customers get the information they need, exactly when they need it in order to keep their teams online and productive,” said Anil Varanasi, CEO of Meter.

Cerebras has made its inference service available across three competitively priced tiers: Free, Developer, and Enterprise.

  • The Free Tier offers free API access and generous usage limits to anyone who logs in.
  • The Developer Tier, designed for flexible, serverless deployment, provides users with an API endpoint at a fraction of the cost of alternatives in the market, with Llama 3.1 8B and 70B models priced at 10 cents and 60 cents per million tokens, respectively. Looking ahead, Cerebras will be continuously rolling out support for many more models.
  • The Enterprise Tier offers fine-tuned models, custom service level agreements, and dedicated support. Ideal for sustained workloads, enterprises can access Cerebras Inference via a Cerebras-managed private cloud or on customer premises. Pricing for enterprises is available upon request.

Strategic Partnerships to Accelerate AI Development

Building AI applications requires a range of specialized tools at each stage, from open-source model giants to frameworks like LangChain and LlamaIndex that enable rapid development, to Docker, which ensures consistent containerization and deployment of AI-powered applications, and MLOps tools like Weights & Biases that maintain operational efficiency. At the forefront of innovation, companies like Meter are revolutionizing AI-powered network management, while learning platforms like DeepLearning.AI are equipping the next generation of developers with critical skills. Cerebras is proud to collaborate with these industry leaders, including Docker, Nasdaq, LangChain, LlamaIndex, Weights & Biases, Weaviate, AgentOps, and Log10 to drive the future of AI forward.

Cerebras Inference is powered by the Cerebras CS-3 system and its industry-leading AI processor – the Wafer Scale Engine 3 (WSE-3). Unlike graphics processing units that force customers to make trade-offs between speed and capacity, the CS-3 delivers best-in-class per-user performance while delivering high throughput. The massive size of the WSE-3 enables many concurrent users to benefit from blistering speed. With 7,000x more memory bandwidth than the NVIDIA H100, the WSE-3 solves Generative AI’s fundamental technical challenge: memory bandwidth. Developers can easily access the Cerebras Inference API, which is fully compatible with the OpenAI Chat Completions API, making migration seamless with just a few lines of code. Try Cerebras Inference today: www.cerebras.ai.
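
Because the endpoint mirrors the OpenAI Chat Completions API, migrating is mostly a matter of pointing an existing client at a different base URL. A minimal stdlib-only sketch of the request shape follows; the base URL and model identifier are illustrative assumptions, not values taken from the announcement.

```python
import json
import urllib.request

# Sketch of the "few lines of code" migration: the request body is a standard
# OpenAI Chat Completions payload; only the base URL and API key change.
# BASE_URL and the model id below are assumptions for illustration.
BASE_URL = "https://api.cerebras.ai/v1"  # assumed endpoint
API_KEY = "YOUR_API_KEY"

def chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-compatible chat completions request (not sent here)."""
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(body).encode(),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
    )

req = chat_request("llama3.1-8b", "Hello")
print(req.full_url)  # https://api.cerebras.ai/v1/chat/completions
# urllib.request.urlopen(req) would send it; omitted here since it needs a key.
```

In practice most teams would keep their existing OpenAI SDK code and override only the client's base URL and credentials, which is what "seamless migration" amounts to.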

Adastra: Setting New Standards in DevOps with AWS Competency Badge
https://ai-techpark.com/adastra-setting-new-standards-in-devops-with-aws-competency-badge/ (Thu, 29 Aug 2024)

The post Adastra: Setting New Standards in DevOps with AWS Competency Badge first appeared on AI-Tech Park.

Adastra Group (also known as Adastra Corporation), a global leader in cloud, data, and artificial intelligence (AI) solutions and services, is proud to announce its achievement of the Amazon Web Services (AWS) DevOps Competency status. This designation recognizes that Adastra provides proven technology and deep expertise to help customers implement continuous integration and continuous delivery practices or automate infrastructure provisioning and management with configuration management tools on AWS. This accomplishment underscores Adastra’s dedication to innovation and excellence by working with AWS.

Achieving the AWS DevOps Competency differentiates Adastra as an AWS Partner Network (APN) member that has demonstrated specialized technical proficiency and proven customer success. To receive the designation, APN Partners must possess deep AWS expertise and deliver solutions seamlessly on AWS.

“At Adastra, we leverage DevOps and containerization best practices to break down silos and foster collaboration between development and operations teams for organizations across a variety of sectors. Adastra’s ongoing dedication ensures increased collaboration and flexibility in application development, reflecting our commitment to pioneering solutions that align with the evolving needs of our clients. This AWS Competency underscores our capability to deliver robust, secure and innovative solutions, reinforcing our position as leaders in technology-driven transformations.” – Loan Ly, VP, AWS Partner Sales & Marketing

At Adastra, our expertise lies in identifying your organization’s key challenges and crafting tailored solutions along with strategic implementation plans. As a longstanding AWS Partner, we stand out in the realms of cloud computing, data analytics, and machine learning. Our experienced teams adeptly incorporate AWS technologies into your existing environment, enabling businesses to unlock their maximum capabilities. Our dedication to delivering transformative outcomes solidifies our position as a reliable partner in the ever-evolving world of cloud computing.

As an AWS DevOps Competency Partner, Adastra demonstrates proficiency in delivering DevOps solutions on AWS. We offer a suite of services and software products designed to streamline provisioning, manage infrastructure, deploy application code, automate software release processes, and incorporate security best practices into CI/CD pipelines.

AWS’ DevOps competency assessment is structured into five phases: Initial Assessment, Analysis, Recommendations, Documentation, and Risk Assessment. This framework helps create tailored plans for various industries, emphasizing the critical role of executive sponsorship for successful DevOps transformation. Our approach meets AWS DevOps Competency standards with CI/CD pipelines that enable end-to-end automation – encompassing everything from infrastructure provisioning to all phases of application development – enhancing software release agility.

Moreover, Adastra integrates security throughout the development lifecycle, fostering a robust DevSecOps culture. For monitoring and security, we utilize AWS services and enable multi-account activity tracking and data event redaction for enhanced security.

As an AWS Partner, Adastra leverages best practices to deliver exceptional benefits such as speed, rapid delivery, reliability, scalability, security, and enhanced collaboration. AWS DevOps services integrate cultural philosophies, practices, and tools to accelerate service and application delivery, driving innovation and bolstering competitive market positioning. By breaking down silos between development and operations teams, AWS DevOps services promote collaboration across the application lifecycle and integrate quality assurance and security into a unified DevSecOps approach.

    Moreover, automation tools enable rapid application evolution, empowering engineers to innovate autonomously. By integrating key practices such as continuous integration, continuous delivery, microservices with containerization, infrastructure as code, and proactive monitoring, organizations ensure faster, more frequent, and reliable updates for customers. Containerization, in particular, plays a crucial role in enhancing portability—a key factor underscored in our DevOps expertise. It enables applications to run consistently across diverse environments, promoting scalability and operational efficiency while maintaining deployment consistency.
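Infrastructure as code, one of the practices listed above, reduces to declaring a desired end state and letting tooling compute the changes needed to reach it. A toy, tool-agnostic reconciliation sketch; the resource names and specs below are invented for illustration:

```python
# Toy reconciliation loop in the spirit of infrastructure as code:
# compare declared (desired) resources against what currently exists
# and report the actions needed to converge. Resource names are invented.

def plan(desired, current):
    """Return create/update/delete actions that converge current -> desired."""
    actions = []
    for name, spec in desired.items():
        if name not in current:
            actions.append(("create", name))
        elif current[name] != spec:
            actions.append(("update", name))
    for name in current:
        if name not in desired:
            actions.append(("delete", name))
    return sorted(actions)

desired = {"web": {"size": "m5.large"}, "db": {"size": "r5.xlarge"}}
current = {"web": {"size": "m5.small"}, "cache": {"size": "t3.micro"}}
print(plan(desired, current))
```

Real IaC tools (e.g. CloudFormation or Terraform) perform this same plan/apply split at much greater fidelity; the point of the sketch is that the declaration, not a manual runbook, drives the change.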

    Our application of AWS services and technologies not only results in substantial cost reductions but also optimizes application development workflows. Potential clients can expect unmatched skill, broad industry knowledge, and successful project results as we persist in providing value and stimulating business growth through cutting-edge AWS DevOps services.

Our achievement of this AWS Competency status highlights Adastra’s technical expertise and commitment to fulfilling customer needs. Securing the DevOps badge, alongside our standing as an AWS Specialization Partner, exemplifies our dedication to excellence in cloud solutions and to customer success in steering AI innovation. This accomplishment reaffirms our pledge to enable clients to deliver top-tier cloud solutions through strategic alliances and proficiency in AWS services such as AWS DevOps.

    For more information, please visit: Adastra | Data Analytics and IT Consultancy (adastracorp.com)

Explore AITechPark for the latest advancements in AI, IoT, Cybersecurity, AITech News, and insightful updates from industry experts!

    The post Adastra: Setting New Standards in DevOps with AWS Competency Badge first appeared on AI-Tech Park.

Untether AI’s speedAI Cards Lead MLPerf with Top Performance, Efficiency https://ai-techpark.com/untether-ais-speedai-cards-lead-mlperf-with-top-performance-efficiency/ Thu, 29 Aug 2024 09:02:50 +0000 https://ai-techpark.com/?p=178011
Verified by MLPerf Inference 4.1 benchmarks, speedAI accelerator cards exhibit industry-leading performance, up to 6X the energy efficiency, and 3X lower latency than other submitters

Untether AI®, the leader in energy-centric AI inference acceleration, today announced its world-leading verified submission to the MLPerf® Inference benchmarks, version 4.1. The speedAI®240 Preview accelerator card submission demonstrated the highest throughput of any single PCIe card in the Datacenter and Edge categories for the image classification benchmark, ResNet-50 [1]. The speedAI240 Slim submission exhibited greater than 3X the energy efficiency of other accelerators in the Datacenter Closed Power category [2], 6X the energy efficiency in the Edge Closed category [3], and 3X lower latency in the Edge Closed category [4].

The MLPerf benchmarks are the only objective, peer-reviewed set of AI performance and power benchmarks in existence. MLPerf is supported by MLCommons®, a consortium of industry-leading AI chip developers such as Nvidia, Google, and Intel. Benchmark submissions are audited and peer-reviewed by all submitters for performance, accuracy, latency, and power consumption, ensuring fair and accurate reporting. The barrier to entry is high enough that few startups have attempted to submit results. Untether AI not only submitted, but demonstrated the highest-performance, most energy-efficient, and lowest-latency PCIe AI accelerator cards.

    “AI’s potential is being hamstrung by technologies that force customers to choose between performance and power efficiency. Demonstrating leadership across both vectors in a stringent test environment like MLPerf speaks volumes about the unique value of Untether AI’s At-Memory compute architecture as well as the maturity of our hardware and software,” said Chris Walker, CEO of Untether AI.

    “MLPerf submission requires operating hardware, shipments to customers, computational accuracy, and a mature software stack that can be peer reviewed. It also requires companies to declare how many accelerators are used in their submission. These factors are what makes these benchmarks so objective, but also creates a high bar that many companies can’t meet in performance, accuracy or transparency of their solution,” said Bob Beachler, VP of Product at Untether AI.

    Untether AI submitted two different cards and multiple system configurations in the Datacenter Closed, Datacenter Power, Edge Closed, and Edge Power categories for the MLPerf v4.1 ResNet-50 benchmark. In the Datacenter Closed offline performance submission of ResNet-50, it had a verified result of 70,348 samples/s[1], the highest throughput of any PCIe card submitted. In the Datacenter Closed Power, its 309,752 Server Queries/S at 986 Watts was 3X more energy efficient than the closest competitor [5].

In the Edge submission, it had a verified result of 0.12ms for single-stream latency, 0.17ms for multi-stream latency, and 70,348 samples/s for offline throughput [4]. These latency values are the fastest ever recorded for an MLPerf ResNet-50 submission. In the Edge Closed Power category, Untether AI was the only company that submitted, so no direct comparison is available. However, normalizing to single cards and their published TDPs, the speedAI240 Slim card demonstrated a 6X improvement in energy efficiency [3].
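The efficiency comparisons above come down to a simple performance-per-watt calculation. Working it through with the verified Datacenter Power figures quoted in this article (the competitor value is only implied by the stated "3X" ratio, not a reported number):

```python
# Performance per watt: the metric behind the efficiency claims above.
# Untether AI's numbers are the verified figures quoted in this article;
# the competitor figure is only implied by the stated "3X" ratio.

def queries_per_watt(queries_per_sec, watts):
    """Throughput divided by power draw: the usual efficiency metric."""
    return queries_per_sec / watts

untether = queries_per_watt(309_752, 986)  # Datacenter Closed Power result
print(round(untether, 1))                  # ~314.2 queries/s per watt

# "3X more energy efficient" implies the closest competitor landed near
# one third of that figure:
print(round(untether / 3, 1))              # ~104.7 queries/s per watt
```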

    Untether AI enlisted the aid of Krai, a company that provides AI computer systems and has been trusted to submit MLPerf benchmarks by companies such as Dell, Lenovo, Qualcomm, Google, and HPE. Anton Lokhmotov, CEO of Krai, said, “We were impressed not only by the record-breaking performance and energy efficiency of the speedAI 240 accelerators, but also the readiness and robustness of the imAIgine SDK which facilitated creating the benchmark submissions.”

    Untether AI is excited to have its At-Memory technology available in shipping hardware and verified by MLPerf. The speedAI240 accelerator cards’ world-leading performance and energy efficiency will transform AI inference, making it faster and more energy efficient for markets including datacenters, video surveillance, vision guided robotics, agricultural technology, machine inspection, and autonomous vehicles. To find out more about Untether AI acceleration solutions and its recent MLPerf benchmark scores, please visit https://www.untether.ai/mlperf-results/.


    The post Untether AI’s speedAI Cards Lead MLPerf with Top Performance, Efficiency first appeared on AI-Tech Park.

Copado Announces Key Executive Appointments https://ai-techpark.com/copado-announces-key-executive-appointments/ Wed, 28 Aug 2024 16:30:00 +0000 https://ai-techpark.com/?p=177948
    Copado, the leader in DevOps for business applications, today announced the appointment of several new executives to the leadership team. The strategic hires are poised to drive Copado’s internal and external AI initiatives, further solidifying the company’s position in the industry.

    Joining the Copado team are:

• Aishling Finnegan as Vice President of Marketing and Digital Transformation. Recognized as one of the most influential women in UK tech, Finnegan is responsible for developing and executing go-to-market strategies that will accelerate awareness, sales, adoption and market penetration for Copado’s AI-powered DevOps platform. Previously, she was Vice President of Product Strategy, GTM and Transformation at Conga, where she leveraged AI to drive business innovation and efficiency. Her work has been instrumental in helping organizations understand the critical role of AI in transforming processes and enhancing decision-making capabilities. Finnegan’s contributions have earned her several prestigious accolades, including the Stevie® Award for Female Executive of the Year and a spot in the Top 10 Women in IT Summit & Awards Series 2023.
    • Steve Simpson as Vice President of Global Enablement and Learning. Simpson will support and grow partner enablement while enhancing and expanding Copado training and learning, including materials to increase AI adoption across the Copado ecosystem, including more than 95,000 community members. A Certified Salesforce Technical Architect, he holds the highest level of certification in the Salesforce ecosystem and is an expert on the Salesforce platform. Simpson has been a part of the Salesforce ecosystem for nearly 20 years and has previously served as a Trailhead Architect Instructor.
    • Ed Salay as Senior Vice President and Corporate Controller. Salay will lead the accounting organization, building the team, processes, and systems to help Copado scale. He has a proven track record for driving growth in B2B SaaS companies, including C-level roles at Sail Internet, VoiceBase and Jobscience.

“We welcome Aishling, Steve and Ed to the Copado team,” said Ted Elliott, CEO of Copado. “Their collective expertise and leadership will be instrumental as we continue to drive innovation and expand our AI initiatives. These strategic hires reflect our commitment to deeply integrating AI into our product roadmap and operations, ensuring the company remains at the forefront of the DevOps industry.”

    Copado was named to the shortlist for “Best AI-driven Automation Solution” by the A.I. Awards and for “Best AI-enabled SaaS Solution” by the SaaS Awards. The company was also a finalist for “Best SaaS Product for Web or App Development” in the SaaS Awards.


    The post Copado Announces Key Executive Appointments first appeared on AI-Tech Park.

NVIDIA and Global Partners Launch NIM Agent Blueprints https://ai-techpark.com/nvidia-and-global-partners-launch-nim-agent-blueprints/ Wed, 28 Aug 2024 08:35:42 +0000 https://ai-techpark.com/?p=177867
  • Catalog of Customizable Workflows Speeds Deployments of Core Generative AI Use Cases, Starting With Customer Service, Drug Discovery and Data Extraction for PDFs, With More to Come
  • Companies Can Build and Operationalize Their AI Applications — Creating Data-Driven AI Flywheels — Using NIM Agent Blueprints Along With NIM Microservices and NeMo Framework, All Part of the NVIDIA AI Enterprise Platform
  • Accenture, Cisco, Dell Technologies, Deloitte, Hewlett Packard Enterprise, Lenovo, SoftServe, World Wide Technology Among First Partners Delivering NIM Agent Blueprints to World’s Enterprises
  • NVIDIA today announced NVIDIA NIM™ Agent Blueprints, a catalog of pretrained, customizable AI workflows that equip millions of enterprise developers with a full suite of software for building and deploying generative AI applications for canonical use cases, such as customer service avatars, retrieval-augmented generation and drug discovery virtual screening.

NIM Agent Blueprints provide a jump-start for developers creating AI applications that use one or more AI agents. They include sample applications built with NVIDIA NeMo™, NVIDIA NIM and partner microservices, reference code, customization documentation and a Helm chart for deployment.

    Enterprises can modify NIM Agent Blueprints using their business data and run their generative AI applications across accelerated data centers and clouds. With NIM Agent Blueprints, enterprises can continually refine their AI applications based on user feedback, creating a data-driven AI flywheel.

    The first NIM Agent Blueprints now available include a digital human workflow for customer service, a generative virtual screening workflow for computer-aided drug discovery and a multimodal PDF data extraction workflow for enterprise retrieval-augmented generation (RAG) that can use vast quantities of business data for more accurate responses. NIM Agent Blueprints are free for developers to experience and download and can be deployed in production with the NVIDIA AI Enterprise software platform.

Global system integrators and technology solutions providers Accenture, Deloitte, SoftServe and World Wide Technology (WWT) are bringing NVIDIA NIM Agent Blueprints to enterprises worldwide. Cisco, Dell Technologies, Hewlett Packard Enterprise and Lenovo are offering full-stack NVIDIA-accelerated infrastructure and solutions to speed NIM Agent Blueprints deployments.

    “Generative AI is advancing at lightspeed. Frontier model capabilities are growing exponentially with a continuous stream of new applications,” said Jensen Huang, founder and CEO of NVIDIA. “The enterprise AI wave is here. With the NVIDIA AI Enterprise toolkit — including NeMo, NIM microservices and the latest NIM Agent Blueprints — our expansive partner ecosystem is poised to help enterprises customize open-source models, build bespoke AI applications and deploy them seamlessly across any cloud, on premises or at the edge.”

    Digital Human NIM Agent Blueprint Advances Customer Service
    Gartner® reports that 80% of conversational offerings will embed generative AI by 2025, up from 20% in 2024(1). The digital human NIM Agent Blueprint for customer service helps enterprises rapidly prepare for this coming shift, bringing enterprise applications to life with a 3D animated avatar interface.

With approachable, humanlike interactions, customer service applications can provide more engaging user experiences compared to traditional customer service options. Powered by NVIDIA Tokkio technologies, the digital human workflow features NVIDIA software including NVIDIA ACE, NVIDIA Omniverse RTX™, NVIDIA Audio2Face™ and Llama 3.1 NIM microservices, and is designed to integrate with existing enterprise generative AI applications built using RAG.

    Multimodal PDF Data Extraction NIM Agent Blueprint Taps Business Data
    The multimodal PDF data extraction workflow for enterprise RAG uses NVIDIA NeMo Retriever NIM microservices to unlock insights from massive volumes of enterprise PDF data. With this workflow, developers can create digital humans, AI agents or customer service chatbots that can quickly become experts on any topic captured within their corpus of PDF data.

    Using the workflow, enterprises can combine NeMo Retriever NIM microservices with community or custom models to build high-accuracy, multimodal retrieval pipelines that can be deployed wherever enterprise data resides.
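The retrieval step described above reduces to: index chunks of enterprise content, score them against a query, and pass the top matches to the generator. A deliberately minimal sketch that uses word overlap as a stand-in for the learned embeddings a real retriever microservice would use; the sample chunks are invented:

```python
# Minimal retrieval step of a RAG pipeline: rank stored chunks against a
# query and return the top-k as context for a generative model.
# Word overlap is a stand-in for learned embeddings; chunks are invented.

def retrieve(query, chunks, k=2):
    """Return up to k chunks sharing the most words with the query."""
    q = set(query.lower().split())
    scored = [(len(q & set(c.lower().split())), c) for c in chunks]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [c for score, c in scored[:k] if score > 0]

chunks = [
    "Quarterly revenue grew 12 percent year over year.",
    "The warranty covers parts and labor for two years.",
    "Revenue guidance for next quarter was raised.",
]
print(retrieve("what was revenue growth this quarter", chunks, k=2))
```

In a production multimodal pipeline the same top-k step runs over embeddings of text, tables and images extracted from PDFs, which is what lets the downstream agent answer only from the enterprise's own corpus.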

    Generative Virtual Screening NIM Agent Blueprint Accelerates Drug Discovery
    The generative virtual screening NVIDIA NIM Agent Blueprint for drug discovery accelerates the identification and optimization of promising drug-like molecules, significantly reducing time and cost by generating molecules with favorable properties and higher probabilities of success.

    Researchers and application developers can quickly customize and deploy AI models for 3D protein structure prediction, small molecule generation and molecular docking. This blueprint incorporates NVIDIA NIM microservices — including AlphaFold2, MolMIM and DiffDock — to accelerate the virtual screening of small molecules using generative models.

    In combination with other tools available in NVIDIA BioNeMo™, enterprises can easily connect multiple NIM Agent Blueprints to build increasingly sophisticated AI applications and accelerate their drug discovery work.

    Additional blueprints will be released monthly for workflows to build generative AI applications for customer experience, content generation, software engineering, and product research and development.

    NVIDIA Partner Ecosystem Amplifies Enterprise Generative AI Success
    NVIDIA partners are readying to help the world’s enterprises rapidly build and deploy their own generative AI applications using NIM Agent Blueprints.

    Global professional services company Accenture will add NVIDIA NIM Agent Blueprints to its Accenture AI Refinery™, unveiled last month.

    “Across industries, generative AI is acting as a catalyst for companies looking to reinvent with tech, data and AI,” said Julie Sweet, chair and CEO of Accenture. “By integrating NVIDIA’s catalog of workflows into Accenture’s AI Refinery, we can help our clients develop custom AI systems at speed and reimagine how they do business and serve their customers to drive stronger business outcomes and create new value.”

    Global consulting firm Deloitte will integrate NVIDIA NIM Agent Blueprints into its deep portfolio of NVIDIA-powered solutions.

    “While many organizations are still working to fully harness the potential of GenAI, its implementation is steadily enhancing efficiencies and productivity,” said Jason Girzadas, CEO of Deloitte US. “By embedding NVIDIA’s NIM Agent Blueprints into enterprise solutions that are built on NVIDIA NIM microservices, Deloitte is engaging with our clients to innovate faster, unlock new growth opportunities and define AI-competitive advantage.”

    IT consulting and digital services provider SoftServe is integrating NIM Agent Blueprints into its generative AI portfolio to speed enterprise adoption.

    “Every enterprise knows generative AI is central to modernizing their operations, but not every enterprise knows where to begin their generative AI journey,” said Harry Propper, CEO of SoftServe. “Adding NVIDIA NIM Agent Blueprints into the SoftServe Gen AI Solutions portfolio gives clients proven frameworks for developing AI applications that put their own data to work.”

    A solution provider for the majority of Fortune 100 companies, WWT will assist enterprises in building NIM Agent Blueprints that leverage their business data.

    “WWT is committed to helping enterprises harness the power of AI as a catalyst for business transformation,” said Jim Kavanaugh, cofounder and CEO of World Wide Technology. “WWT’s AI Proving Ground, equipped with NVIDIA NIM Agent Blueprints and coupled with our data scientists, consultants and high-performance architecture engineers, offers a comprehensive resource for our clients to experiment with, validate and scale AI solutions.”

    Enterprises can develop and deploy NIM Agent Blueprints on NVIDIA AI platforms with compute, networking and software provided by NVIDIA’s global server manufacturing partners.

    These include Cisco Nexus HyperFabric AI clusters with NVIDIA, the Dell AI Factory with NVIDIA, NVIDIA AI Computing by HPE and HPE Private Cloud AI, as well as Lenovo Hybrid AI solutions powered by NVIDIA.

    “Cisco, together with NVIDIA, created a revolutionary, flexible and simple-to-deploy AI infrastructure with Nexus HyperFabric,” said Chuck Robbins, chair and CEO of Cisco. “Combining Cisco innovation with NVIDIA NIM Agent Blueprints offers customers a simple and secure way to deploy generative AI fast and efficiently, with the adaptability they need to build and customize new applications at scale.”

    “Dell Technologies and NVIDIA are making it easy for enterprises to unlock the power of AI-enabled applications and agents,” said Michael Dell, founder and CEO of Dell Technologies. “Incorporating NVIDIA NIM Agent Blueprints into the Dell AI Factory with NVIDIA provides an express lane for the transformative value of AI.”

    “HPE and NVIDIA are expanding on our recent blockbuster collaboration to deliver NVIDIA AI Computing by HPE,” said Antonio Neri, president and CEO of Hewlett Packard Enterprise. “By integrating NVIDIA NIM Agent Blueprints into our co-developed turnkey HPE Private Cloud AI solution, we will enable enterprises to focus resources on developing new AI use cases that boost productivity and unlock new revenue streams.”

    “Generative AI is a full-stack challenge that requires accelerated infrastructure, specialized software and services, and powerful AI-ready devices that can maximize the capabilities of Hybrid AI,” said Yuanqing Yang, chairman and CEO of Lenovo. “NVIDIA NIM Agent Blueprints, combined with Lenovo’s comprehensive, end-to-end portfolio, give enterprises a head start for building generative AI applications that they can run everywhere on Lenovo Hybrid AI.”

    Enterprises can experience NVIDIA NIM Agent Blueprints today.


    The post NVIDIA and Global Partners Launch NIM Agent Blueprints first appeared on AI-Tech Park.
