Cloud Computing Platforms - AI-Tech Park
https://ai-techpark.com | AI, ML, IoT, Cybersecurity News & Trend Analysis, Interviews

InFlux Debuts FluxAI to Showcase FluxEdge’s Advanced Capabilities
https://ai-techpark.com/influx-debuts-fluxai-to-showcase-fluxedges-advanced-capabilities/ | Tue, 27 Aug 2024

InFlux, a leading global decentralized technology company specializing in cloud infrastructure, artificial intelligence, and decentralized cloud computing services, today announced the launch of its new AI application, FluxAI. FluxAI leverages generative artificial intelligence to mimic human interaction in answering questions and completing tasks, and it includes a code assist feature that helps users write code or solve coding challenges. The platform offers the advantages of speed and affordability, improved responsiveness for a better user experience, and a generous free plan. Users can also access more advanced features with a premium version that costs only $5 per month.

“I am ecstatic to announce the launch of FluxAI, a groundbreaking AI application designed to showcase the limitless potential of our latest product FluxEdge. Powered by a decentralized GPU network, it has been meticulously crafted by our global community, resulting in a diverse and scalable infrastructure. After six months of relentless dedication and innovation, we are thrilled to share this cutting-edge technology with the world, and particularly with our passionate Flux community. As FluxAI and FluxEdge continue to thrive and grow, we are confident that this will not only revolutionize the industry but also contribute to the flourishing of the Flux ecosystem.”—Daz Williams, Chief AI Officer, InFlux Technologies.

The launch of FluxAI also introduces a crucial privacy aspect unique to decentralized technologies like InFlux. Unlike conventional AI platforms that exploit users’ data to train and fine-tune their models, FluxAI is 100% private. While conversation histories are preserved for improved interaction and accessibility, user data remains private and is not utilized by the FluxAI model in any way. This is particularly significant for organizations and enterprises dealing with proprietary or sensitive data, as it ensures the utmost privacy and security, preventing their data from appearing in future versions of the model.

FluxAI is powered by state-of-the-art, open-source large language models (LLMs), supporting the latest advancements in artificial intelligence. This advanced technology ensures that FluxAI is at the forefront of AI innovation. Furthermore, API integration for the application will be available towards the end of the year, catering to diverse development needs and further expanding the frontiers of AI applications.

The release highlights the multi-faceted offerings of the InFlux ecosystem and showcases the stellar computational capabilities of FluxEdge to power AI applications for businesses and enterprises.

mimik Joins NVIDIA Inception to Proliferate Hybrid Edge AI Deployments
https://ai-techpark.com/mimik-joins-nvidia-inception-to-proliferate-hybrid-edge-ai-deployments/ | Thu, 22 Aug 2024

mimik, a pioneer in hybrid edge cloud computing, announced it has joined NVIDIA Inception, a program that nurtures startups revolutionizing industries with technological advancements.

mimik is focused on bringing advanced AI agents and AI-driven workflow capabilities to the edge, enabling more efficient, secure, and privacy-preserving computing ecosystems. The company’s flagship product, mimik ai, is a universal cloud-native operating environment for hybrid edge AI agents. With its core edgeEngine and suite of AI microservices, mimik ai provides an offline-first approach that reduces cloud cost and dependency, multi-AI communication, an ad hoc service mesh, and enhanced security and privacy, marking a significant leap forward in cross-platform hybrid edge AI.

As part of NVIDIA Inception, mimik ai is poised to accelerate hybrid edge AI agent deployment and seamless workflow automation, integrating effortlessly into NVIDIA’s ecosystem to drive intelligent solutions across industries.

“Joining NVIDIA Inception marks a significant milestone in our mission to unleash hybrid edge AI,” said Fay Arjomandi, founder and CEO of mimik. “This program will help empower us to push the boundaries of what’s possible with hybrid edge AI, so we can ultimately deliver more value to our customers and developers across various industries.”

The program also offers mimik the opportunity to collaborate with industry-leading experts and other AI-driven organizations, fostering new product delivery in the hybrid edge AI space. Sam Miri, CRO at mimik, added, “The resources and network provided by NVIDIA Inception align perfectly with our growth strategy. We’re excited to leverage this opportunity to scale our business and bring the benefits of mimik ai operating environment (mimOE) to a wider range of industries, unleashing a new level of collaboration and context-aware intelligence across NVIDIA platforms. This will drive both industry success and our revenue growth.”

NVIDIA Inception helps startups during critical stages of product development, prototyping and deployment. Every Inception member gets a custom set of ongoing benefits, such as NVIDIA Deep Learning Institute credits, preferred pricing on NVIDIA hardware and software, and technological assistance, which provides startups with the fundamental tools to help them grow.

To learn more about mimik ai, visit https://mimik.com and download mimOE.ai at https://developer.mimik.com. Request a demo or contact our sales team today to experience the future of AI integration with mimik ai.

Nerdio supports Multi Entra ID, boosts reach with strategic partnerships
https://ai-techpark.com/nerdio-supports-multi-entra-id-boosts-reach-with-strategic-partnerships/ | Wed, 21 Aug 2024

Nerdio, a premier solution for organizations of all sizes looking to manage and cost-optimize native Microsoft cloud technologies, today announced support for multiple Entra ID tenants in its flagship product, Nerdio Manager for Enterprise, allowing organizations to link and manage Azure Virtual Desktop (AVD) deployments that span multiple Entra ID tenants from a single console. This powerful new feature streamlines the management of Azure environments by centralizing control, improving visibility, and simplifying administrative tasks for enterprise organizations with multiple Entra ID tenants.

“With Multi-Tenant Instances in Nerdio Manager for Enterprise, our organization is capable of handling complex AVD deployments far more efficiently,” said Garion Brown, Global Vice President of Platform Engineering, Teleperformance. “We recommended this feature to Nerdio, and it has provided us with a unified view of all our instances of Azure, significantly reducing the need to switch between different accounts in the Azure portal.”

Nerdio continues its market-leading growth in the first half of 2024, underscoring the increasing demand for its innovative cloud management solutions among enterprises, particularly as organizations look to modernize their cloud end-user computing solutions. New customers such as the British Columbia Lottery Corporation, Leggett & Platt Inc., and Osceola County School District join leading organizations including Make-a-Wish UK, The University of North Florida, Chevron, The Government of Alberta, and Equitable Bank in reaping the benefits of Nerdio Manager for Enterprise.

“We are seeing tremendous growth in the DaaS market,” said Vadim Vladimirskiy, CEO, Nerdio. “Globally, organizations are increasingly acknowledging the benefits of AVD, offering secure and efficient remote work access while reducing overall expenses. This growth reflects a broader shift towards cloud-based solutions as businesses prioritize flexibility, security, and cost-efficiency in their operations.”

Building on its momentum, Nerdio has expanded its reach through strategic partnerships with Carahsoft and Kyndryl. By teaming up with Carahsoft, Nerdio offers Nerdio Manager for Enterprise to the public sector, leveraging Carahsoft’s extensive network of reseller partners and NASA SEWP V contracts to extend cloud technology solutions to government agencies nationwide. Additionally, the partnership with Kyndryl, the world’s largest IT infrastructure services provider, supports business and IT modernization for customers. The collaboration enhances Kyndryl’s capabilities in delivering tailored solutions across Azure Virtual Desktop, Windows 365, and Microsoft Intune to meet customers’ unique environments and business needs.

The industry has taken notice of Nerdio’s continued growth, resulting in several accolades. Nerdio was named the 2024 Microsoft Americas Partner of the Year, recognized in the inaugural CRN AI 100 list, and won Silver in the Stevie Awards for Cloud Application/Service. Nerdio’s CEO, Vadim Vladimirskiy, was also recognized as Entrepreneur of the Year 2024 for the Midwest region. Further cementing its reputation, Nerdio garnered multiple badges in the G2 Summer rankings, including Momentum Leader, Best Results, Best Support, and Leader in the Cloud VDI and DaaS reports.

To learn more about Nerdio, please visit www.getnerdio.com.

Cloud Foundry Foundation Announces New Governing Board Chairman
https://ai-techpark.com/cloud-foundry-foundation-announces-new-governing-board-chairman/ | Wed, 14 Aug 2024

Stephan Klevenz, technical lead at SAP SE, brings longtime advocacy to leadership role

Cloud Foundry Foundation today announced that Stephan Klevenz, technical lead, SAP SE, has been named the new chair of its Governing Board. Klevenz succeeds Catherine McGarvey, former vice president of software engineering, VMware.

 “Cloud Foundry has been a pioneer in developer technologies for cloud applications and is experiencing a renaissance with the rapid adoption of Kubernetes and cloud-native computing,” said Klevenz. “We’re pushing ahead with new technologies through projects like Paketo Buildpacks and Korifi that continue making things simpler for developers. Our users consistently tell us that when it comes to Cloud Foundry, ‘It just runs.’ This reliability is at the heart of what we do, and I look forward to working with our community as we continue to innovate in these exciting times.”

The Cloud Foundry Foundation Governing Board is responsible for the oversight and management of the Foundation’s business affairs, property, and interests, while technical decision-making authority is vested in the Foundation’s Technical Oversight Committee and working groups. Other governing board members include representatives from Comcast, SAP, and VMware Tanzu.

“Having Stephan lead our governing board brings a wealth of experience to further enhance the developer experience with virtual machines (VMs) and Kubernetes,” said Chris Clark, program manager for Cloud Foundry Foundation. “Along with this product expertise, Stephan brings great enthusiasm and passion to lead our Governing Board.”

At SAP, Klevenz focuses on Cloud Foundry topics for the SAP Business Technology Platform. His career with SAP spans 20 years, during which he has worked on various software engineering projects involving open source.

Cloud Foundry is an open source technology backed by the largest tech companies in the world, including IBM, SAP, and VMware, and is being used by leaders in manufacturing, telecommunications, and financial services. Only Cloud Foundry delivers the velocity needed to continuously deliver apps at the speed of business. Cloud Foundry’s container-based architecture runs apps in any language on a choice of cloud platforms — Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, OpenStack, and more. With a robust services ecosystem and simple integration with existing technologies, Cloud Foundry is the modern standard for mission-critical apps for global organizations.

CSA Addresses Using AI for Offensive Security in New Report
https://ai-techpark.com/csa-addresses-using-ai-for-offensive-security-in-new-report/ | Wed, 07 Aug 2024

Paper explores the unique transformative potential, challenges, and limitations of Large Language Model (LLM)-powered AI in offensive security

Black Hat Conference (Las Vegas) – Today, the Cloud Security Alliance (CSA), the world’s leading organization dedicated to defining standards, certifications, and best practices to help ensure a secure cloud computing environment, released Using Artificial Intelligence (AI) for Offensive Security. The report, drafted by the AI Technology and Risk Working Group, explores the transformative potential of Large Language Model (LLM)-powered AI by examining its integration into offensive security. Specifically, the report addresses current challenges and showcases AI’s capabilities across five security phases: reconnaissance, scanning, vulnerability analysis, exploitation, and reporting.

“AI is here to transform offensive security, however, it’s not a silver bullet. Because AI solutions are limited by the scope of their training data and algorithms, it’s essential to understand the current state-of-the-art of AI and leverage it as an augmentation tool for human security professionals,” said Adam Lundqvist, a lead author of the paper. “By adopting AI, training teams on potential and risks, and fostering a culture of continuous improvement, organizations can significantly enhance their defensive capabilities and secure a competitive edge in cybersecurity.”

Among the report’s key findings:

  • Security teams face a shortage of skilled professionals, increasingly complex and dynamic environments, and the need to balance automation with manual testing.
  • AI, mainly through LLMs and AI agents, offers significant capabilities in offensive security, including data analysis, code and text generation, planning realistic attack scenarios, reasoning, and tool orchestration. These capabilities can help automate reconnaissance, optimize scanning processes, assess vulnerabilities, generate comprehensive reports, and even autonomously exploit vulnerabilities.
  • Leveraging AI in offensive security enhances scalability, efficiency, speed, discovery of more complex vulnerabilities, and ultimately, the overall security posture.
  • While promising, no single AI solution can revolutionize offensive security today. Ongoing experimentation with AI is needed to find and implement effective solutions. This requires creating an environment that encourages learning and development, where team members can use AI tools and techniques to grow their skills.

As outlined in the report, the utilization of AI in offensive security presents unique opportunities but also limitations. Managing large datasets and ensuring accurate vulnerability detection are significant challenges that can be addressed through technological advancements and best practices. However, limitations such as token window constraints in AI models require careful planning and mitigation today. To overcome these challenges, the report’s authors recommend that organizations incorporate AI to automate tasks and augment human capabilities; maintain human oversight to validate AI outputs, improve quality, and ensure technical advantage; and implement robust governance, risk, and compliance frameworks and controls to ensure safe, secure, and ethical AI use.

“While AI offers significant potential to enhance offensive security capabilities, it’s crucial to acknowledge the difficulties that can arise from its use. Putting appropriate mitigation strategies, such as those covered in this report, in place can help ensure AI’s safe and effective integration into security frameworks,” said Kirti Chopra, a lead author of the paper.

Vultr Announces Launch of Industry Cloud
https://ai-techpark.com/vultr-announces-launch-of-industry-cloud/ | Wed, 07 Aug 2024

New industry-specific capabilities empower organizations with scalable, cost-effective, high-performance AI and cloud solutions worldwide

Vultr, the world’s largest privately held cloud computing platform, today announced the launch of its industry cloud solution, which delivers cutting-edge, vertical-specific cloud computing solutions that meet industry needs and regulatory requirements across the retail, manufacturing, healthcare, media, telecommunications, and finance sectors. Leveraging Vultr’s global cloud infrastructure spanning six continents and 32 cloud data center locations – including Vultr Cloud GPU accelerated by NVIDIA GPUs for artificial intelligence (AI) and machine learning (ML) – Vultr industry cloud optimizes infrastructure and operations for specific industry sectors around the world.

With economic and geopolitical uncertainty, digital transformation, and the need for rapid innovation driving cloud adoption across industry sectors, Vultr is the first independent global cloud computing platform to provide enterprises in core sectors with specialized, composable cloud platforms, available in all regions across the globe and tailored to each sector’s unique AI and digital transformation needs as well as local regulatory, compliance, and data governance requirements.

Vultr industry cloud is now available across key verticals, including:

  • Retail – enabling enterprises to focus on scalable, flexible infrastructure to handle seasonal demand fluctuations and enhance customer experiences.
  • Manufacturing – prioritizing integration with industrial IoT and real-time data processing for production optimization.
  • Healthcare – out-of-the-box availability of robust security and compliance capabilities to protect sensitive patient data and support healthcare applications.
  • Media and Entertainment – providing high-performance computing resources for content creation, rendering, and distribution.
  • Telecommunications – delivering reliable, low-latency infrastructure to support network services and customer applications.
  • Finance – built-in data security, compliance, and real-time processing capabilities to support financial transactions and analytics.

“While our customers come from a diverse range of industries, they all demand excellence when it comes to their cloud solutions. To achieve this, today’s CIOs and CTOs need scalable, reliable, and industry-specific cloud solutions to accelerate their digital and AI transformation,” said Kevin Cochrane, CMO of Vultr. “The launch of Vultr industry cloud provides the first global cloud alternative, with dynamic allocation of high-performance cloud computing, GPUs, storage, and network resources to enhance operational efficiency and deliver continuous operations.”

Vultr offers a cost-effective alternative to traditional infrastructure. Unlike traditional cloud platforms, Vultr’s composable infrastructure integrates end-to-end industry cloud capabilities with its core cloud services components:

  • Software as a Service (SaaS) – Vultr’s easy-to-use infrastructure integrates with SaaS offerings through APIs and strategic cloud alliance partnerships, allowing businesses to efficiently deploy and manage various applications and offering accessibility, scalability, and reduced management overhead.
  • Platform as a Service (PaaS) – Providing a robust development and deployment cloud environment, and enabling developers to build, test, and deploy applications without managing the underlying infrastructure, Vultr integrates cloud computing with PaaS, offering tools for streamlined app development, deployment, and multi-cloud flexibility.
  • Infrastructure as a Service (IaaS) – Vultr’s IaaS offerings include scalable bare metal, Cloud GPU, virtual machines, storage solutions, networking services, containers, and database management, providing flexible infrastructure for various workloads.
  • Data Fabrics – Vultr represents data fabric through its scalable cloud infrastructure, unifying data management across diverse environments. With services like compute instances, block storage, and managed databases, Vultr enables seamless data integration and efficient governance.
  • Marketplaces and App Stores – Vultr Marketplace features a wide array of pre-configured applications and services, simplifying the process of finding and deploying cloud solutions.
  • Compliance and Security – Vultr offers robust security features and compliance tools, ensuring data protection and adherence to regional and industry-specific regulations.
  • Edge Computing – Vultr’s edge computing solutions bring computing power closer to the data source, reducing latency and enhancing performance for real-time applications.

“Vultr industry cloud reaffirms our commitment to supporting enterprises, giving them the adaptability they need to cope with accelerating industry disruptions and unique requirements,” Cochrane added. “With tailored, industry-specific cloud capabilities, our customers can accelerate digital transformation and achieve differentiation faster than the competition.”

Learn more about Vultr industry cloud and contact sales to get started.

Next-Gen Ampere® Instances Available on Oracle Cloud Infrastructure
https://ai-techpark.com/next-gen-ampere-instances-available-on-oracle-cloud-infrastructure/ | Tue, 06 Aug 2024

New OCI Ampere A2 Compute instances powered by AmpereOne® deliver best in class price-per-performance.

Ampere and Oracle Cloud Infrastructure (OCI) announced today the launch of second-generation Ampere-based compute instances, OCI Ampere A2 Compute, based on the AmpereOne® family of processors. The new offering builds upon the success of OCI Ampere A1 Compute instances, which have been adopted by OCI customers and deployed across over 100 OCI services, including Oracle Database services such as HeatWave and MySQL, as well as Oracle Cloud Applications.

OCI Ampere A2 Compute instances provide higher core count virtual machines and high-density containers within a single server, delivering more performance, scalability, and cost-efficiency for cloud native workloads. In addition, the OCI Ampere A2 Compute series further extends OCI’s lead in both Arm-based cloud computing and price-for-performance.

“OCI and Ampere began our collaboration with the ground-breaking A1 shapes. We’ve demonstrated the versatility of these shapes on a wide range of workloads from general purpose applications and OCI services to the most recently announced and highly demanding use case: Llama3 generative AI services,” said Jeff Wittich, Chief Product Officer at Ampere. “Building on this momentum, the new OCI Ampere A2 Compute shapes using our AmpereOne® processors are set to create a new baseline in price-performance for the cloud industry across an ever-expanding variety of cloud native workloads and instance types.” 

Key features, pricing and performance metrics for the OCI Ampere A2 Compute instances include:

  • Up to 78 OCPUs (1 OCPU = 2 AmpereOne® cores, 156 cores total)
  • Up to 946 GB of DDR5 memory with 25% more bandwidth compared to A1
  • Flexible VM sizes, with up to 946 GB of memory, and block storage boot volumes up to 32TB
  • Networking bandwidth of up to 78 Gbps Ethernet and up to 24 VNICs
  • Testing shows that our customers may see up to 2X better price performance vs. comparable x86-based shapes
  • Leadership Arm-based cloud compute pricing at $0.014/OCPU/hour and $0.002/GB/hour (a rough monthly-cost sketch follows this list)
  • Oracle’s Flex Shapes allow customers to tune the core count of their shape based on their actual workloads for even more savings
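
As a quick illustration of how the per-OCPU and per-GB rates above combine for a flex shape, here is a minimal cost sketch. The shape size used is a hypothetical example, the 730-hour month is a common approximation, and storage, networking, and any discounts are not included.

```python
# Rough monthly cost sketch for a hypothetical OCI Ampere A2 flex shape,
# using only the per-OCPU and per-GB rates quoted in the list above.
OCPU_RATE_USD_HR = 0.014    # $ per OCPU per hour (quoted above)
MEMORY_RATE_USD_HR = 0.002  # $ per GB of memory per hour (quoted above)
HOURS_PER_MONTH = 730       # common approximation (24 * 365 / 12)

def monthly_cost(ocpus: int, memory_gb: int) -> float:
    """Estimate monthly compute cost (compute and memory only)."""
    hourly = ocpus * OCPU_RATE_USD_HR + memory_gb * MEMORY_RATE_USD_HR
    return hourly * HOURS_PER_MONTH

# Hypothetical example: a 16-OCPU (32-core) flex shape with 64 GB of memory.
print(f"~${monthly_cost(16, 64):,.2f} per month")  # ~$256.96
```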

Like OCI Ampere A1 instances, OCI Ampere A2 Compute instances show strong performance for multiple AI functions, including generative AI. This performance is made possible through joint development efforts between Ampere Computing and Oracle Cloud Infrastructure (OCI) that have recently delivered up to a 152% performance gain over the previous upstream llama.cpp open-source implementation.

Beyond AI, OCI Ampere A1 and A2 Compute instances are also well-suited for other cloud native workloads, such as analytics and databases, media services, video streaming, and web services. They offer the linear scalability, low latency, density, and predictable performance these workloads need, bringing more performance and higher cost savings. For example, when deploying a typical web service on OCI Ampere A2 Compute using popular applications such as MySQL, NGINX, Cassandra, and Redis, the savings can be compelling. An enterprise spending $50M annually across a weighted blend of these popular web service components could save up to $21.4M in cloud infrastructure costs compared with OCI E5 x86-based shapes. That translates to up to 43% lower infrastructure costs, a 30% reduction in power consumption, and 33% lower carbon emissions.
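
For readers who want to reproduce the headline percentage from the figures quoted above (treated here purely as the illustrative numbers given in the release), a quick check looks like this:

```python
# Sanity check on the quoted savings figures.
annual_spend = 50_000_000  # $50M annual spend on the web-service blend
max_savings = 21_400_000   # up to $21.4M claimed savings vs. OCI E5 x86-based shapes

print(f"{max_savings / annual_spend:.1%}")  # 42.8%, consistent with "up to 43% less"
```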

OCI Ampere A1 and A2 Compute shapes represent a significant advancement in cloud computing by lowering general purpose cloud computing costs, addressing AI computing efficiency, providing a more predictable and linearly scalable compute resource, and helping companies achieve ESG goals faster. Visit the OCI Ampere Compute website to get started.

FICO Recognized as Trusted Cloud Provider by Cloud Security Alliance
https://ai-techpark.com/fico-recognized-as-trusted-cloud-provider-by-cloud-security-alliance/ | Wed, 31 Jul 2024

FICO obtained Cloud Security Alliance (CSA) Security, Trust & Assurance Registry (STAR) Level 1 and Level 2 certifications, demonstrating its commitment to security and transparency

Global analytics software leader FICO today announced that it has achieved Cloud Security Alliance (CSA) Security, Trust & Assurance Registry (STAR) Level 1 and Level 2 certifications, solidifying its status as a CSA Trusted Cloud Provider. CSA is the world’s leading organization dedicated to defining and raising awareness of best practices to help ensure a secure cloud computing environment.

“We’re thrilled to be recognized by CSA as a Trusted Cloud Provider,” said Ben Nelson, FICO’s chief cybersecurity officer. “This is yet another validation of our commitment to the security of our clients’ data and the services we provide to them. It reflects our dedication to delivering the most secure cloud environment for our clients.”

CSA STAR is a free, publicly accessible registry that documents the security and privacy controls provided by popular cloud computing offerings. It encompasses the key principles of transparency, rigorous auditing, and harmonization of standards outlined in the Cloud Controls Matrix (CCM), and allows organizations to show current and potential customers their security and compliance posture, including the regulations, standards, and frameworks to which they adhere. Developed to ensure cloud service providers are better able to maintain data confidentiality, integrity, and availability, CSA STAR is a powerful program for security assurance in the cloud.

“The CSA Trusted Cloud Provider trustmark is a significant achievement that highlights an organization’s dedication to maintaining the highest standards in cloud security practices,” said Jim Reavis, CEO and co-founder of the Cloud Security Alliance. “FICO not only meets these rigorous standards but exceeds them, exemplifying the best in secure cloud computing and setting a benchmark for the industry.”

Participation in the STAR program provides multiple benefits, including indications of best practices and validation of security posture of cloud offerings. It consists of two levels of assurance (self-assessment and third-party certification), based upon:

  • The CSA Cloud Controls Matrix (CCM) v4, a cybersecurity control framework for cloud computing. It is composed of 197 control objectives that are structured in 17 domains covering all key aspects of cloud technology. It can be used as a tool for the systematic assessment of a cloud implementation and provides guidance on which security controls should be implemented by which actor within the cloud supply chain. The controls framework is aligned to the CSA Security Guidance for Cloud Computing.
  • General Data Protection Regulation (GDPR) compliance with the EU Cloud Code of Conduct (CoC).

View FICO’s CSA Registry entry here: https://cloudsecurityalliance.org/star/registry/fico/services/fico

DigitalOcean Hires Wade Wegner as Chief Ecosystem and Growth Officer
https://ai-techpark.com/digitalocean-hires-wade-wegner-as-chief-ecosystem-and-growth-officer/ | Tue, 09 Jul 2024

Wegner will strengthen DigitalOcean’s position within the developer community and further fuel growth

DigitalOcean Holdings, Inc. (NYSE: DOCN), the developer cloud optimized for startups and growing technology businesses, announced today that Wade Wegner joined the company as Chief Ecosystem and Growth Officer. This unique and critical position bridges the gap between research and development and go-to-market strategy. In this executive role, Wegner will oversee Developer Relations, Marketing, Growth, and Partnerships.

“Wade is a transformative technology leader who blends deep technical expertise with strategic vision,” said Paddy Srinivasan, CEO of DigitalOcean. “He brings robust developer relations and developer ecosystem experience to our team, at a time when the development landscape is rapidly evolving with AI/ML. Our community content is also central to our mission. Every month, millions of people rely on our content to solve problems and drive innovation. We offer thousands of tutorials and have a vibrant community of users who generate more content each day. We are excited to bring Wade on board to help us continue to develop this community to support developers.”

Throughout his career, Wegner has demonstrated a knack for leading high-impact teams, driving product growth and fostering innovation. He has helped build some of the strongest developer communities for Microsoft Azure, Salesforce, and Heroku. From his roots as a developer, Wegner has consistently shown that he can build and lead teams that push the boundaries of what’s possible in product development, developer relations, and strategy. He comes to DigitalOcean with extensive experience from RapidAPI, Twilio, Salesforce, and Microsoft.

“I’m thrilled to join DigitalOcean, a platform that has consistently championed our devoted developer community,” said Wegner. “My vision is to amplify DigitalOcean’s mission by fostering developer advocacy at every touchpoint, expanding our partnership network and startup ecosystem, and ensuring seamless alignment between our internal strategy and the vibrant external community we serve. I’m committed to driving innovation that empowers developers and startups, making cloud computing more accessible in an age of AI for creators worldwide.”

GenAI Reshaping Cloud Computing Market
https://ai-techpark.com/genai-reshaping-cloud-computing-market/ | Tue, 02 Jul 2024

As GenAI matures, AI-specific cloud platforms are expected to emerge, new ISG reports say

Generative AI is reshaping the cloud computing market and is expected to lead to more AI workload-specific cloud platforms in the future, a new research report from leading global technology research and advisory firm Information Services Group (ISG) (Nasdaq: III) says.

The ISG Buyers Guides for Cloud Platforms, produced by ISG Software Research (formerly Ventana Research), say cloud technology, due to its scale and shared services model, is best suited for the delivery of GenAI-enabled applications at scale and the development of general-purpose foundation models.

The reports note specialty cloud providers will become an important consideration for many enterprise cloud architectures. The emergence of domain-specific models, which can be smaller, less computationally intensive and lower the hallucination risks associated with general-purpose models, suggests a future where cloud platforms may become increasingly specialized, the reports say.

“As GenAI continues to evolve, we can expect to see more AI workload-specific cloud platforms in the future,” said Jeff Orr, director for digital technology, ISG Software Research. “GenAI applications require extensive computational resources, and cloud computing platforms provide the scalability enterprises need to allocate resources dynamically based on the needs of various GenAI workloads. This has led to a reshaping of the cloud computing market, with specialized infrastructures catering to computational giants like large language models (LLMs).”

The reports say that by migrating to the cloud, businesses can streamline operations, reduce costs and accelerate innovation. To meet the efficiency and innovation challenge, enterprises are expected to adopt multiple clouds.

ISG projects that by 2027, over three-quarters of enterprises will operate across multiple public cloud computing environments, necessitating the requirement for a unified data platform to virtualize access for business continuity.

When an enterprise CIO or IT leader is considering a cloud platform, the choice between public, private, hybrid, multi-cloud or industry/sovereign cloud should be driven by the organization’s specific objectives, goals and desired outcomes.

“The conversation in enterprises has shifted from ‘Why cloud?’ to which types of cloud for which workloads,” said Orr. “A diversity in cloud service providers now exists to align with specific industry and business requirements rather than a one-size-fits-all agreement.”

In addition to aligning with strategic objectives and operational needs, it is also important for enterprises to consider factors such as cost, resource availability, technical expertise and the potential need for digital transformation when choosing cloud platforms. A well-chosen cloud strategy, the reports say, can drive innovation, enhance customer experiences and provide a competitive edge in the digital marketplace.

The reports cite other benefits of using a cloud service provider over building on-premises servers. They include low capital outlay, faster time-to-market, agility and optimal cloud delivery models from Infrastructure-as-a-Service (IaaS) to Platform-as-a-Service (PaaS) to Software-as-a-Service (SaaS). While on-premises servers will not go away entirely, they must continue to compete effectively with cloud computing as an alternative for many applications and services, the reports say.

The ISG Buyers Guide for Cloud Platforms is designed to provide a holistic view of a software provider’s ability to serve a variety of cloud workloads with a set of cloud platform products. As such, the Buyers Guide includes the full breadth of deployment models, services, and functionality.

For its 2024 Buyers Guides for Cloud Platforms, ISG assessed software providers across four cloud platform categories – Cloud Computing Platforms, Public Cloud Platforms, Private Cloud Platforms and Hybrid Cloud Platforms – and produced a separate Buyers Guide for each. A total of 13 providers were assessed: Akamai, Alibaba Cloud, Amazon Web Services (AWS), DigitalOcean, Google, Huawei Cloud, IBM, Leaseweb, Microsoft, Oracle, OVHcloud, Tencent and Vultr.

ISG Software Research designates the top three software providers as Leaders in each category. For the 2024 study, the leading providers in ranked order are:

Cloud Computing Platforms: Microsoft, AWS and Google

Public Cloud Platforms: Microsoft, AWS and Google

Private Cloud Platforms: Microsoft, Google and Oracle

Hybrid Cloud Platforms: Microsoft, AWS and Google

“Cloud computing has become as ubiquitous as the power and water in your home or business, but the software platforms for providing this service are still in an evolutionary transition to be able to interoperate across systems in an enterprise and across multiple enterprises,” says Mark Smith, partner of Software Research at ISG. “This Buyers Guide gives enterprises that need to consolidate and simplify their cloud platforms a better understanding of the current cloud landscape.”

The ISG Buyers Guides for Cloud Platforms are the distillation of more than a year of market and product research efforts. The research is not sponsored nor influenced by software providers and is conducted solely to help enterprises optimize their business and IT software investments.

Visit this webpage to learn more about the Buyers Guides for Cloud Platforms and read executive summaries of each of the four reports. The complete reports, including provider rankings across seven product and customer experience dimensions and detailed research findings on each provider, are available by contacting ISG Software Research.
