Cloud Computing - AI-Tech Park
https://ai-techpark.com
AI, ML, IoT, Cybersecurity News & Trend Analysis, Interviews

Ubiq announced new patents for reliability & data mobility
https://ai-techpark.com/ubiq-announced-new-patents-for-reliability-data-mobility/
Fri, 30 Aug 2024 09:45:00 +0000

UBIQ Network, with its patented decentralized architecture, provides an alternative to conventional cloud computing, offering increased reliability and data mobility at a lower cost.

The emergence of cloud computing, or the on-demand availability of computer system resources such as storage and computing power, has brought considerable convenience to individuals and organizations worldwide. It allows people to access their data from anywhere with an internet connection. Cloud computing has also enabled businesses to operate distributed workforces, often involving hybrid and remote work, giving them access to a wider talent pool and allowing operations to continue even during disruptions such as the COVID-19 pandemic.

While cloud computing has many benefits, there are also drawbacks, the most prominent of which is reliability. Cloud computing is a centralized technology that relies on data centers, which are often large facilities housing numerous servers. This introduces a single point of failure: if a data center is compromised, the entire service often goes down, interrupting operations and negatively affecting clients. This has happened with commercial cloud hosting services several times, such as on February 28, 2017, when Amazon Web Services suffered an outage lasting several hours that left businesses relying on AWS inaccessible.

It was this incident that inspired Mr. Steven He to devise a solution that ensures greater reliability while providing the same convenience as cloud computing. Mr. He, who has more than 30 years of experience in IT, including enterprise software and game development, founded Ubiq, Inc. that same year. The company’s mission is to develop the next generation of network computing. His research and efforts have resulted in several patent applications and grants from the US Patent and Trademark Office, with more patent applications filed in other countries.

Ubiq’s flagship product, UBIQ Network, is a next-generation cloud computing platform that seeks to solve the reliability issues of traditional cloud hosting services by fostering decentralization. According to He, today’s cloud computing is a deviation from the original goal of the internet, which was based on the US Department of Defense’s ARPANET. During the Cold War, the US wanted to decentralize its information network, preventing its capabilities from being crippled by a single strike on headquarters.

However, He believes that cloud computing is a move in the reverse direction, reintroducing a single point of failure. Through UBIQ Network, he aims to offer an alternative to traditional cloud computing services that delivers exceptional reliability and data mobility. Ubiq has been granted patents in both these areas, signifying a major improvement over existing technologies. The company’s motto is SAFER, which stands for Serve All Fairly, Effectively, and Reliably.

With regard to reliability, He explains that it is measured in terms of uptime “nines”, where 99% uptime is “two nines” while 99.9% and 99.99% are “three nines” and “four nines” respectively. While four nines may seem like a very high level of uptime, it still corresponds to around 52 minutes of downtime each year. For crucial sectors such as financial services, utilities, and defense, a reliability of five nines is needed, which translates to an average of less than six minutes of downtime per year.
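
These figures can be sanity-checked with a quick calculation (an illustrative aside, not part of Ubiq’s announcement): annual downtime follows directly from the availability percentage.

```python
# Annual downtime implied by an availability ("nines") level.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

def downtime_minutes_per_year(availability_percent: float) -> float:
    """Expected minutes of downtime per year at a given uptime percentage."""
    return MINUTES_PER_YEAR * (1 - availability_percent / 100)

for label, pct in [("two nines", 99.0), ("three nines", 99.9),
                   ("four nines", 99.99), ("five nines", 99.999)]:
    print(f"{label} ({pct}%): {downtime_minutes_per_year(pct):.1f} minutes/year")
# four nines -> ~52.6 minutes/year; five nines -> ~5.3 minutes/year
```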

According to He, UBIQ Network’s patented decentralized architecture allows it to provide a level of reliability commensurate with the client’s needs, up to five nines and beyond. With conventional cloud computing, increasing reliability from four nines to five nines entails an exponential increase in cost; with UBIQ Network, the increase is only linear, resulting in more savings for clients. The network now has more than 40 nodes worldwide, primarily in the US, China, Canada, Germany, and Hong Kong, with multiple software applications running on the platform.

Meanwhile, UBIQ Network also provides data mobility to clients. According to He, with conventional cloud computing, a user’s data resides in a data center in a particular locality. If a user is located near the data center, communication is smooth, but performance degrades the farther they are from it. With UBIQ Network, the data moves alongside the user and is stored in the node geographically closest to them. For example, a user on the US East Coast will likely have their data in a networking node located in New York. If they move to Germany, the data can be moved to nodes in Frankfurt or Berlin, ensuring smoother operation.
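
The article does not describe UBIQ Network’s actual placement algorithm, but the idea of keeping data on the geographically closest node can be illustrated with a minimal sketch that selects a node by great-circle distance. The node list and coordinates below are hypothetical and used purely for illustration.

```python
from math import radians, sin, cos, asin, sqrt

# Hypothetical node locations (latitude, longitude); not UBIQ's actual topology.
NODES = {"New York": (40.71, -74.01), "Toronto": (43.65, -79.38),
         "Frankfurt": (50.11, 8.68), "Hong Kong": (22.32, 114.17)}

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def closest_node(user_lat: float, user_lon: float) -> str:
    """Return the node nearest to the user's current location."""
    return min(NODES, key=lambda n: haversine_km(user_lat, user_lon, *NODES[n]))

print(closest_node(40.7, -74.0))   # US East Coast user -> "New York"
print(closest_node(52.52, 13.40))  # after relocating to Berlin -> "Frankfurt"
```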

“Our vision has been continually evaluated and is now coming to reality,” He says. “In the age of centralized cloud computing, UBIQ Network provides an alternative that ensures better reliability and data mobility. It will not replace cloud computing, as cloud computing will not replace supercomputing. However, UBIQ Network overcomes several pain points and inherent weaknesses of cloud computing, providing end users with reliable, safe, high-performance, and cost-effective online services wherever they are.”

Edgecore Networks Announces Optimized 400G Spine Switch
https://ai-techpark.com/edgecore-networks-announces-optimized-400g-spine-switch/
Fri, 30 Aug 2024 09:15:00 +0000

Edgecore Networks, a global leader in open networking solutions, today announced the launch of its 400G-optimized spine switch, the DCS511. The new switch is designed to meet the demanding requirements of data center and cloud computing environments, offering 12.8 terabits per second of switching capacity and low latency, making it ideal for data centers, service providers, and cloud operators.

The DCS511 is an open network data center switch powered by the Broadcom Tomahawk 4 chipset, offering 32x400G ports on a single device. It can be deployed as a spine switch supporting 100/400 GbE spine-to-spine or spine-to-leaf interconnects.

This open network switch is equipped with the Open Network Install Environment (ONIE), allowing the installation of compatible Network Operating System (NOS) software, including the open-source Open Network Linux, as well as commercial NOS offerings.

Edgecore’s DCS511 32-port Tomahawk 4 switch offers a cost-effective solution for lower-capacity data centers seeking 400G options, providing an alternative to 32x400G Tomahawk 3 spine switches. The DCS511 fabric delivers a low-density 400G switching solution for general and AI compute data center workloads. Its lower port density allows organizations to tailor their network infrastructure to their specific needs, avoid over-provisioning, and reduce unnecessary costs.

Edgecore is committed to exploring the possibilities of data center architectures, delivering future-ready solutions for AI/ML connectivity needs. Get ready to embrace a faster, higher-capacity network with the Edgecore DCS511 high-capacity network switch.

Key Features of the DCS511:

  • Up to 16 ports for 400G ZR or ZR+ coherent optics, with a 24-watt power budget per port
  • Supports SONiC NOS for webscale and enterprise-grade data centers
  • Pre-loaded with the Open Network Install Environment (ONIE) for automated loading of compatible open-source and commercial NOS offerings
  • Supports VXLAN routing and bridging, an advantage over Tomahawk 3-based switches
  • Supports cognitive routing, GLB, path rebalancing, and RoCEv2 over VXLAN for AI and LLM (Large Language Model) workloads

Availability: The Edgecore DCS511 is now available for order. For more product information, please contact sales@edge-core.com.

CyberArk Named Trusted Cloud Provider by Cloud Security Alliance
https://ai-techpark.com/cyberark-named-trusted-cloud-provider-by-cloud-security-alliance/
Thu, 29 Aug 2024 15:15:00 +0000

CyberArk (NASDAQ: CYBR), the identity security company, today announced that it has earned the Trusted Cloud Provider trustmark from the Cloud Security Alliance (CSA), the world’s leading organization dedicated to defining and raising awareness of best practices to help ensure a secure cloud computing environment. The Trusted Cloud Provider trustmark helps organizations identify providers that have invested in achieving the highest standards of cloud security in their product offerings.

“This internationally-recognized certification from CSA reaffirms CyberArk’s cloud security commitment to organizations across the world,” said Clarence Hinton, chief strategy officer at CyberArk. “CyberArk innovation goes beyond just-in-time access, offering zero standing privileges capabilities for multi-cloud environments. The identity security platform streamlines and safeguards workforce and high-risk user access, like developers, locks down endpoint privileges and protects human and machine credentials in all environments.”

Organizations across the globe recognize the increasing urgency around securing their multi-cloud environments as well as the cloud-based solutions they consume. In a dynamic, AI-powered threat landscape, the Trusted Cloud Provider trustmark is a mark of CyberArk’s identity security leadership and its mission to enable customers to stay ahead of well-funded, innovative cyberattackers by rethinking and modernizing the way all identities, both human and machine, are secured with intelligent privilege controls.

“Attaining the CSA Trusted Cloud Provider trustmark is a major accomplishment, showcasing an organization’s commitment to upholding the highest standards in cloud security,” said Jim Reavis, co-founder and CEO of the Cloud Security Alliance. “CyberArk not only meets these stringent requirements but surpasses them, helping customers secure increasingly complex cloud environments.”

InFlux Debuts FluxAI to Showcase FluxEdge’s Advanced Capabilities
https://ai-techpark.com/influx-debuts-fluxai-to-showcase-fluxedges-advanced-capabilities/
Tue, 27 Aug 2024 17:30:00 +0000

InFlux, a leading global decentralized technology company specializing in cloud infrastructure, artificial intelligence, and decentralized cloud computing services, has today announced the launch of its new AI application. FluxAI leverages generative artificial intelligence technology to mimic human interaction in answering questions and completing tasks. It also comes with a code assist feature that helps users write code or solve coding challenges. The platform offers speed and affordability, improved responses for a better user experience, and a generous free plan. Users can also access more advanced features with a premium version that costs only $5 per month.

“I am ecstatic to announce the launch of FluxAI, a groundbreaking AI application designed to showcase the limitless potential of our latest product FluxEdge. Powered by a decentralized GPU network, it has been meticulously crafted by our global community, resulting in a diverse and scalable infrastructure. After six months of relentless dedication and innovation, we are thrilled to share this cutting-edge technology with the world, and particularly with our passionate Flux community. As FluxAI and FluxEdge continue to thrive and grow, we are confident that this will not only revolutionize the industry but also contribute to the flourishing of the Flux ecosystem.”—Daz Williams, Chief AI Officer, InFlux Technologies.

The launch of FluxAI also introduces a crucial privacy aspect unique to decentralized technologies like InFlux. Unlike conventional AI platforms that exploit users’ data to train and fine-tune their models, FluxAI is 100% private. While conversation histories are preserved for improved interaction and accessibility, user data remains private and is not utilized by the FluxAI model in any way. This is particularly significant for organizations and enterprises dealing with proprietary or sensitive data, as it ensures the utmost privacy and security, preventing their data from appearing in future model versions.

FluxAI is powered by state-of-the-art, open-source LLMs (large language models), supporting the latest advancements in artificial intelligence. This advanced technology ensures that FluxAI is at the forefront of AI innovation. Furthermore, API integration for the application will be available towards the end of the year, catering to diverse development needs and further expanding the frontiers of AI applications.

The release highlights the multi-faceted offerings of the InFlux ecosystem and showcases the stellar computational capabilities of FluxEdge to power AI applications for businesses and enterprises.

mimik Joins NVIDIA Inception to Proliferate Hybrid Edge AI Deployments
https://ai-techpark.com/mimik-joins-nvidia-inception-to-proliferate-hybrid-edge-ai-deployments/
Thu, 22 Aug 2024 10:45:00 +0000

mimik, a pioneer in hybrid edge cloud computing, announced it has joined NVIDIA Inception, a program that nurtures startups revolutionizing industries with technological advancements.

mimik is focused on bringing advanced AI agents and AI-driven workflow capabilities to the edge, enabling more efficient, secure, and privacy-preserving computing ecosystems. The company’s flagship product, mimik ai, is a universal cloud-native operating environment for hybrid edge AI agents. With its core edgeEngine and suite of AI microservices, mimik ai provides an offline-first approach that reduces cloud cost and dependency, multi-AI communication, an ad hoc service mesh, and enhanced security and privacy, marking a significant leap forward in cross-platform hybrid edge AI.
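
The announcement does not document mimik’s interfaces, but the offline-first pattern it describes can be sketched as follows: try a local edge microservice first and fall back to the cloud only when the edge is unavailable. The endpoints and names below are hypothetical, not mimik’s actual API.

```python
import requests  # assumes the third-party 'requests' package is installed

# Hypothetical endpoints used only to illustrate the offline-first idea.
LOCAL_EDGE_URL = "http://localhost:8083/infer"    # on-device microservice
CLOUD_FALLBACK_URL = "https://example.com/infer"  # remote cloud service

def infer(payload: dict, edge_timeout_s: float = 2.0) -> dict:
    """Offline-first inference: prefer the local edge node, fall back to the cloud."""
    try:
        resp = requests.post(LOCAL_EDGE_URL, json=payload, timeout=edge_timeout_s)
        resp.raise_for_status()
        return resp.json()   # served locally: lower latency, no cloud round trip
    except requests.RequestException:
        resp = requests.post(CLOUD_FALLBACK_URL, json=payload, timeout=10)
        resp.raise_for_status()
        return resp.json()   # cloud fallback keeps the workflow running
```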

As part of NVIDIA Inception, mimik ai is poised to accelerate hybrid edge AI agent deployment and seamless workflow automation, integrating effortlessly into NVIDIA’s ecosystem to drive intelligent solutions across industries.

“Joining NVIDIA Inception marks a significant milestone in our mission to unleash hybrid edge AI,” said Fay Arjomandi, founder and CEO of mimik. “This program will help empower us to push the boundaries of what’s possible with hybrid edge AI, so we can ultimately deliver more value to our customers and developers across various industries.”

The program also offers mimik the opportunity to collaborate with industry-leading experts and other AI-driven organizations, fostering new product delivery in the hybrid edge AI space. Sam Miri, CRO at mimik, added, “The resources and network provided by NVIDIA Inception align perfectly with our growth strategy. We’re excited to leverage this opportunity to scale our business and bring the benefits of mimik ai operating environment (mimOE) to a wider range of industries, unleashing a new level of collaboration and context-aware intelligence across NVIDIA platforms. This will drive both industry success and our revenue growth.”

NVIDIA Inception helps startups during critical stages of product development, prototyping and deployment. Every Inception member gets a custom set of ongoing benefits, such as NVIDIA Deep Learning Institute credits, preferred pricing on NVIDIA hardware and software, and technological assistance, which provides startups with the fundamental tools to help them grow.

To learn more about mimik ai, visit https://mimik.com and download mimOE.ai at https://developer.mimik.com. Request a demo or contact our sales team today to experience the future of AI integration with mimik ai.

Nerdio supports Multi Entra ID, boosts reach with strategic partnerships
https://ai-techpark.com/nerdio-supports-multi-entra-id-boosts-reach-with-strategic-partnerships/
Wed, 21 Aug 2024 08:45:00 +0000

Nerdio, a premier solution for organizations of all sizes looking to manage and cost-optimize native Microsoft cloud technologies, today announced support for multiple Entra ID tenants in its flagship product, Nerdio Manager for Enterprise, allowing organizations to link and manage Azure Virtual Desktop (AVD) deployments that span multiple Entra ID tenants from a single console. This powerful new feature streamlines the management of Azure environments by centralizing control, improving visibility, and simplifying administrative tasks for enterprise organizations with multiple Entra ID tenants.

“With Multi-Tenant Instances in Nerdio Manager for Enterprise, our organization is capable of handling complex AVD deployments far more efficiently,” said Garion Brown, Global Vice President of Platform Engineering, Teleperformance. “We recommended this feature to Nerdio, and it has provided us with a unified view of all our instances of Azure, significantly reducing the need to switch between different accounts in the Azure portal.”

Nerdio continues its market-leading growth in the first half of 2024, underscoring the increasing demand for its innovative cloud management solutions among enterprises, particularly as organizations look to modernize their cloud end-user computing solutions. New customers such as the British Columbia Lottery Corporation, Leggett & Platt Inc., and Osceola County School District join leading organizations including Make-a-Wish UK, The University of North Florida, Chevron, The Government of Alberta, and Equitable Bank, reaping the benefits of Nerdio Manager for Enterprise. 

“We are seeing tremendous growth in the DaaS market,” said Vadim Vladimirskiy, CEO, Nerdio. “Globally, organizations are increasingly acknowledging the benefits of AVD, offering secure and efficient remote work access while reducing overall expenses. This growth reflects a broader shift towards cloud-based solutions as businesses prioritize flexibility, security, and cost-efficiency in their operations.”

Building on its momentum, Nerdio has expanded its reach through strategic partnerships with Carahsoft and Kyndryl. By teaming up with Carahsoft, Nerdio offers Nerdio Manager for Enterprise to the public sector, leveraging Carahsoft’s extensive network of reseller partners and NASA SEWP V contracts to extend cloud technology solutions to government agencies nationwide. Additionally, the partnership with Kyndryl, the world’s largest IT infrastructure services provider, supports business and IT modernization for customers. The collaboration enhances Kyndryl’s capabilities in delivering tailored solutions across Azure Virtual Desktop, Windows 365, and Microsoft Intune to meet customers’ unique environments and business needs.

The industry has taken notice of Nerdio’s continued growth, resulting in several accolades. Nerdio was named the 2024 Microsoft Americas Partner of the Year, recognized in the inaugural CRN AI 100 list, and won Silver in the Stevies for Cloud Application/Service. Nerdio’s CEO, Vadim, was also recognized as Entrepreneur of the Year 2024 for the Midwest region. Further cementing its reputation, Nerdio garnered multiple badges in the G2 Summer rankings, including Momentum Leader, Best Results, Best Support, and Leader in the Cloud VDI and DaaS reports.

To learn more about Nerdio, please visit www.getnerdio.com

Cloud Foundry Foundation Announces New Governing Board Chairman
https://ai-techpark.com/cloud-foundry-foundation-announces-new-governing-board-chairman/
Wed, 14 Aug 2024 11:00:00 +0000

Stephan Klevenz, technical lead at SAP SE, brings longtime advocacy to leadership role

Cloud Foundry Foundation today announced that Stephan Klevenz, technical lead, SAP SE, has been named the new chair of its Governing Board. Klevenz succeeds Catherine McGarvey, former vice president of software engineering, VMware.

 “Cloud Foundry has been a pioneer in developer technologies for cloud applications and is experiencing a renaissance with the rapid adoption of Kubernetes and cloud-native computing,” said Klevenz. “We’re pushing ahead with new technologies through projects like Paketo Buildpacks and Korifi that continue making things simpler for developers. Our users consistently tell us that when it comes to Cloud Foundry, ‘It just runs.’ This reliability is at the heart of what we do, and I look forward to working with our community as we continue to innovate in these exciting times.”

The Cloud Foundry Foundation Governing Board is responsible for the oversight and management of the Foundation’s business affairs, property, and interests, while technical decision-making authority is vested in the Foundation’s Technical Oversight Committee and working groups. Other governing board members include representatives from Comcast, SAP, and VMware Tanzu.

“Having Stephan lead our governing board brings a wealth of experience to further enhance the developer experience with virtual machines (VMs) and Kubernetes,” said Chris Clark, program manager for Cloud Foundry Foundation. “Along with this product expertise, Stephan brings great enthusiasm and passion to lead our Governing Board.”

At SAP, Klevenz focuses on Cloud Foundry topics for the SAP Business Technology Platform. His career with SAP spans 20 years, during which he has worked on various software engineering projects involving open source.

Cloud Foundry is an open source technology backed by the largest tech companies in the world, including IBM, SAP, and VMware, and is being used by leaders in manufacturing, telecommunications, and financial services. Only Cloud Foundry delivers the velocity needed to continuously deliver apps at the speed of business. Cloud Foundry’s container-based architecture runs apps in any language on a choice of cloud platforms — Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, OpenStack, and more. With a robust services ecosystem and simple integration with existing technologies, Cloud Foundry is the modern standard for mission-critical apps for global organizations.

CSA Addresses Using AI for Offensive Security in New Report
https://ai-techpark.com/csa-addresses-using-ai-for-offensive-security-in-new-report/
Wed, 07 Aug 2024 14:30:00 +0000

Paper explores the unique transformative potential, challenges, and limitations of Large Language Model (LLM)-powered AI in offensive security

Black Hat Conference (Las Vegas) – Today, the Cloud Security Alliance (CSA), the world’s leading organization dedicated to defining standards, certifications, and best practices to help ensure a secure cloud computing environment, released Using Artificial Intelligence (AI) for Offensive Security. The report, drafted by the AI Technology and Risk Working Group, explores the transformative potential of Large Language Model (LLM)-powered AI by examining its integration into offensive security. Specifically, the report addresses current challenges and showcases AI’s capabilities across five security phases: reconnaissance, scanning, vulnerability analysis, exploitation, and reporting.

“AI is here to transform offensive security, however, it’s not a silver bullet. Because AI solutions are limited by the scope of their training data and algorithms, it’s essential to understand the current state-of-the-art of AI and leverage it as an augmentation tool for human security professionals,” said Adam Lundqvist, a lead author of the paper. “By adopting AI, training teams on potential and risks, and fostering a culture of continuous improvement, organizations can significantly enhance their defensive capabilities and secure a competitive edge in cybersecurity.”

Among the report’s key findings:

  • Security teams face a shortage of skilled professionals, increasingly complex and dynamic environments, and the need to balance automation with manual testing.
  • AI, mainly through LLMs and AI agents, offers significant capabilities in offensive security, including data analysis, code and text generation, planning realistic attack scenarios, reasoning, and tool orchestration. These capabilities can help automate reconnaissance, optimize scanning processes, assess vulnerabilities, generate comprehensive reports, and even autonomously exploit vulnerabilities.
  • Leveraging AI in offensive security enhances scalability, efficiency, speed, discovery of more complex vulnerabilities, and ultimately, the overall security posture.
  • While promising, no single AI solution can revolutionize offensive security today. Ongoing experimentation with AI is needed to find and implement effective solutions. This requires creating an environment that encourages learning and development, where team members can use AI tools and techniques to grow their skills.

As outlined in the report, the utilization of AI in offensive security presents unique opportunities but also limitations. Managing large datasets and ensuring accurate vulnerability detection are significant challenges that can be addressed through technological advancements and best practices. However, limitations such as token window constraints in AI models require careful planning and mitigation today. To overcome these challenges, the report’s authors recommend that organizations incorporate AI to automate tasks and augment human capabilities; maintain human oversight to validate AI outputs, improve quality, and ensure technical advantage; and implement robust governance, risk, and compliance frameworks and controls to ensure safe, secure, and ethical AI use.
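
As a small illustration of the kind of mitigation the report alludes to for token window constraints (this sketch is not drawn from the report itself), a long artifact such as a scan log can be split into overlapping chunks so each piece fits within a model’s context window. The chunk size and the whitespace-based token estimate below are simplifying assumptions; real pipelines would use the tokenizer matching the target model.

```python
def chunk_for_token_window(text: str, max_tokens: int = 4000, overlap: int = 200) -> list[str]:
    """Split a long input into overlapping chunks that fit a model's token window.

    Words are used as a crude stand-in for tokens; swap in the model's own
    tokenizer for accurate counts.
    """
    words = text.split()
    chunks, step = [], max_tokens - overlap
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + max_tokens]))
        if start + max_tokens >= len(words):
            break
    return chunks

# Usage: analyze a large vulnerability-scan report chunk by chunk, then
# summarize the per-chunk findings in a final consolidation pass.
```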

“While AI offers significant potential to enhance offensive security capabilities, it’s crucial to acknowledge the difficulties that can arise from its use. Putting appropriate mitigation strategies, such as those covered in this report, in place can help ensure AI’s safe and effective integration into security frameworks,” said Kirti Chopra, a lead author of the paper.

Vultr Announces Launch of Industry Cloud
https://ai-techpark.com/vultr-announces-launch-of-industry-cloud/
Wed, 07 Aug 2024 09:45:04 +0000

New industry-specific capabilities empower organizations with scalable, cost-effective, high-performance AI and cloud solutions worldwide

Vultr, the world’s largest privately held cloud computing platform, today announced the launch of its industry cloud solution, which delivers cutting-edge, vertical-specific cloud computing solutions that meet specific industry needs and regulatory requirements across the retail, manufacturing, healthcare, media, telecommunications, and finance sectors. Leveraging Vultr’s global cloud infrastructure spanning six continents and 32 cloud data center locations – including Vultr Cloud GPU accelerated by NVIDIA GPUs for artificial intelligence (AI) and machine learning (ML) – Vultr industry cloud optimizes infrastructure and operations for specific industry sectors around the world.

Amid economic and geopolitical uncertainty, digital transformation, and the need for rapid innovation driving cloud adoption across industry sectors, Vultr is the first independent global cloud computing platform to provide enterprises across core sectors with specialized, composable cloud platforms, available in all regions across the globe, tailored to specific industry sectors’ unique AI and digital transformation needs and local regulatory, compliance, and data governance requirements.

Vultr industry cloud is now available across key verticals, including:

  • Retail – enabling enterprises to focus on scalable, flexible infrastructure to handle seasonal demand fluctuations and enhance customer experiences.
  • Manufacturing – prioritizing integration with industrial IoT and real-time data processing for production optimization.
  • Healthcare – out-of-the-box availability of robust security and compliance capabilities to protect sensitive patient data and support healthcare applications.
  • Media and Entertainment – providing high-performance computing resources for content creation, rendering, and distribution.
  • Telecommunications – delivering reliable, low-latency infrastructure to support network services and customer applications.
  • Finance – built-in data security, compliance, and real-time processing capabilities to support financial transactions and analytics.

“While our customers come from a diverse range of industries, they all demand excellence when it comes to their cloud solutions. To achieve this, today’s CIOs and CTOs need scalable, reliable, and industry-specific cloud solutions to accelerate their digital and AI transformation,” said Kevin Cochrane, CMO of Vultr. “The launch of Vultr industry cloud provides the first global cloud alternative to dynamic allocation and high-performance cloud computing, GPUs, storage, and network resources to enhance operational efficiency and deliver continuous operations.”

Vultr offers a cost-effective alternative to traditional infrastructure. Unlike traditional cloud platforms, Vultr’s composable infrastructure integrates end-to-end industry cloud capabilities with its core cloud services components:

  • Software as a Service (SaaS) – Vultr’s easy-to-use infrastructure integrates with SaaS offerings through APIs and strategic cloud alliance partnerships, allowing businesses to efficiently deploy and manage various applications and offering accessibility, scalability, and reduced management overhead.
  • Platform as a Service (PaaS) – Providing a robust development and deployment cloud environment, and enabling developers to build, test, and deploy applications without managing the underlying infrastructure, Vultr integrates cloud computing with PaaS, offering tools for streamlined app development, deployment, and multi-cloud flexibility.
  • Infrastructure as a Service (IaaS) – Vultr’s IaaS offerings include scalable bare metal, Cloud GPU, virtual machines, storage solutions, networking services, containers, and database management, providing flexible infrastructure for various workloads.
  • Data Fabrics – Vultr represents data fabric through its scalable cloud infrastructure, unifying data management across diverse environments. With services like compute instances, block storage, and managed databases, Vultr enables seamless data integration and efficient governance.
  • Marketplaces and App Stores – Vultr Marketplace features a wide array of pre-configured applications and services, simplifying the process of finding and deploying cloud solutions.
  • Compliance and Security – Vultr offers robust security features and compliance tools, ensuring data protection and adherence to regional and industry-specific regulations.
  • Edge Computing – Vultr’s edge computing solutions bring computing power closer to the data source, reducing latency and enhancing performance for real-time applications.

“Vultr industry cloud reaffirms our commitment to supporting enterprises, giving them the adaptability they need to cope with accelerating industry disruptions and unique requirements,” Cochrane added. “With tailored, industry-specific cloud capabilities, our customers can accelerate digital transformation and achieve differentiation faster than the competition.”

Learn more about Vultr industry cloud and contact sales to get started.

Next-Gen Ampere® Instances Available on Oracle Cloud Infrastructure
https://ai-techpark.com/next-gen-ampere-instances-available-on-oracle-cloud-infrastructure/
Tue, 06 Aug 2024 10:30:00 +0000

New OCI Ampere A2 Compute instances powered by AmpereOne® deliver best-in-class price-performance.

Ampere and Oracle Cloud Infrastructure (OCI) announced today the launch of second-generation Ampere-based compute instances, OCI Ampere A2 Compute, based on the AmpereOne® family of processors. The new offering builds upon the success of OCI Ampere A1 Compute instances, which have been adopted by OCI customers and deployed across over 100 OCI services, including Oracle Database services such as HeatWave and MySQL, as well as Oracle Cloud Applications.

OCI Ampere A2 Compute instances provide higher core count virtual machines and high-density containers within a single server, delivering more performance, scalability, and cost-efficiency for cloud native workloads. In addition, the OCI Ampere A2 Compute series further extends OCI’s lead in both Arm-based cloud computing and price-for-performance.

“OCI and Ampere began our collaboration with the ground-breaking A1 shapes. We’ve demonstrated the versatility of these shapes on a wide range of workloads from general purpose applications and OCI services to the most recently announced and highly demanding use case: Llama3 generative AI services,” said Jeff Wittich, Chief Product Officer at Ampere. “Building on this momentum, the new OCI Ampere A2 Compute shapes using our AmpereOne® processors are set to create a new baseline in price-performance for the cloud industry across an ever-expanding variety of cloud native workloads and instance types.” 

Key features, pricing and performance metrics for the OCI Ampere A2 Compute instances include:

  • Up to 78 OCPUs (1 OCPU = 2 AmpereOne® cores, 156 cores total)
  • Up to 946 GB of DDR5 memory with 25% more bandwidth compared to A1
  • Flexible VM sizes, with up to 946 GB of memory, and block storage boot volumes up to 32TB
  • Networking bandwidth of up to 78 Gbps Ethernet and up to 24 VNICs
  • Testing shows that our customers may see up to 2X better price-performance versus comparable x86-based shapes
  • Leadership Arm-based cloud compute pricing at $0.014/OCPU/hour and $0.002/GB/hour
  • Oracle’s Flex Shapes allow customers to tune the core count of their shape based on their actual workloads for even more savings

Like OCI Ampere A1 instances, OCI Ampere A2 Compute instances show strong performance for multiple AI functions, including generative AI. This performance is made possible through joint development efforts between Ampere Computing and Oracle Cloud Infrastructure (OCI) that have recently delivered up to 152% performance gain over the previous upstream llama.cpp open-source implementation.

Beyond AI, OCI Ampere A1 and A2 Compute instances are also well-suited for other cloud native workloads, such as analytics and databases, media services, video streaming, and web services. They offer the linear scalability, low latency, density, and predictable performance these workloads need, bringing more performance and higher cost savings. For example, when deploying a typical web service on OCI Ampere A2 Compute using popular applications such as MySQL, NGINX, Cassandra, and Redis, the savings can be very compelling. An enterprise spending $50M annually across a weighted blend of these popular web service components could save up to $21.4M in cloud infrastructure costs compared with OCI E5 x86-based shapes. That amounts to up to 43% lower infrastructure costs, a 30% reduction in power consumption, and 33% less carbon emissions.
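
As a quick check on the figures quoted above (an illustrative calculation, not part of the announcement), the stated savings percentage and the listed per-OCPU and per-GB rates work out as follows:

```python
# Savings figures quoted in the announcement.
annual_spend = 50_000_000   # $50M annual spend on the web-service blend
savings = 21_400_000        # up to $21.4M saved vs. OCI E5 x86-based shapes
print(f"Cost reduction: {savings / annual_spend:.1%}")  # ~42.8%, i.e. "up to 43%"

# Listed OCI Ampere A2 rates applied to the largest flexible VM size.
ocpu_rate, mem_rate = 0.014, 0.002   # $/OCPU/hour and $/GB/hour
ocpus, mem_gb, hours = 78, 946, 730  # max flex shape, ~730 hours per month
monthly = hours * (ocpus * ocpu_rate + mem_gb * mem_rate)
print(f"Largest A2 flex shape: ~${monthly:,.0f}/month")  # ~$2,178/month
```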

OCI Ampere A1 and A2 Compute shapes represent a significant advancement in cloud computing by lowering general purpose cloud computing costs, addressing AI computing efficiency, providing a more predictable and linearly scalable compute resource, and helping companies achieve ESG goals faster. Visit the OCI Ampere Compute website to get started.
