Gesture Control - AI-Tech Park
AI, ML, IoT, Cybersecurity News & Trend Analysis, Interviews
https://ai-techpark.com

J.D. Power 2024 U.S. Tech Experience Index (TXI) Study
https://ai-techpark.com/j-d-power-2024-u-s-tech-experience-index-txi-study/
Tue, 27 Aug 2024 08:59:59 +0000

Genesis Ranks Highest Overall for Tech Innovation for Fourth Consecutive Year; Hyundai Ranks Highest among Mass Market Brands for Fifth Consecutive Year

Are vehicle owners becoming overwhelmed with technology features that don’t solve a problem, don’t work, are difficult to use or are just too limited in functionality? The results of the J.D. Power 2024 U.S. Tech Experience Index (TXI) Study℠, released today, suggest that could be the case. The study, which focuses on the user experience with advanced vehicle technologies as they come to market, finds that while owners offer praise for some advanced features, others are found to be lackluster.

New Artificial Intelligence (AI)-based technologies, like smart climate control, have quickly won popularity with those owners who have used them, yet recognition technologies such as facial recognition, fingerprint readers and interior gesture controls fall out of favor as they unsuccessfully try to solve a problem that owners didn’t know they had. For example, not only do owners say that interior gesture controls can be problematic (43.4 problems per 100 vehicles), but 21% of these owners also say this technology lacks functionality, according to newly added diagnostic questions in this year’s study. These performance metrics, including a lack of perceived usefulness, result in this technology being considered a lost value for any automaker that has invested millions of dollars to bring it to market.

To assist in solving this problem, J.D. Power has developed a return on investment (ROI) analysis as part of the TXI findings, using advanced data science to cluster individual technologies into three categories. The categorization of technologies—must have, nice to have and not necessary—gives automakers the ability to better align their contenting strategy with customer expectations.
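
J.D. Power does not disclose the data science behind this clustering, but the general idea of grouping technologies by owner-reported usefulness and problem rates can be illustrated with a small sketch. The scores below are invented, and the use of k-means here is an assumption for illustration, not the study's actual methodology.

```python
# Hypothetical sketch: cluster technologies into three groups ("must have",
# "nice to have", "not necessary") from owner-reported metrics.
# The scores are made up and do not come from the J.D. Power study.
import numpy as np
from sklearn.cluster import KMeans

# Columns: perceived usefulness (0-10), problems per 100 vehicles (PP100)
technologies = {
    "smart climate control":     (8.9, 12.0),
    "camera rear-view mirror":   (8.4, 18.0),
    "phone-based digital key":   (8.1, 22.0),
    "passenger display screen":  (6.9, 31.0),
    "interior gesture controls": (6.2, 43.4),
}
X = np.array(list(technologies.values()))

# Normalise each column so usefulness and PP100 carry comparable weight.
X_norm = (X - X.mean(axis=0)) / X.std(axis=0)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X_norm)
for name, label in zip(technologies, kmeans.labels_):
    print(f"{name:28s} -> cluster {label}")
```

In practice the clusters would then be labelled by inspecting their centroids, e.g. high usefulness with low problem rates maps to "must have".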

“A strong advanced tech strategy is crucial for all vehicle manufacturers, and many innovative technologies are answering customer needs,” said Kathleen Rizk, senior director of user experience benchmarking and technology at J.D. Power. “At the same time, this year’s study makes it clear that owners find some technologies to be of little use and/or continually annoying. J.D. Power’s ability to calculate the return on investment for individual technologies is a major step in enabling carmakers to determine the technologies that deserve the most attention while helping them ease escalating costs for new vehicles.”

Following are some of the key findings of the 2024 study:

  • Drivers still prefer hands-on tech—hands down: Despite the increasing availability of advanced driver assistance systems (ADAS), many owners remain indifferent to their value. Most owners appreciate features that directly address specific concerns, such as visual blind spots while backing up. However, other ADAS features often fall short, with owners feeling capable of handling tasks without them. This is particularly evident with active driving assistance, as the hands-on-the-wheel version ranks among the lowest-rated ADAS technologies with a low perceived usefulness score (7.61 on a 10-point scale). The hands-free, more advanced version of this tech does not significantly change the user experience as indicated by a usefulness score of 7.98, which can be attributed to the feature not solving a known problem.
  • Owners don’t see value in passenger screens: Automakers are expanding their offering of vehicles containing a passenger display screen despite the feature being classified as “not necessary” by vehicle owners. The tech is negatively reviewed by many owners who point to usability issues. Perhaps the technology would be viewed more favorably if the front passenger seat was used more frequently, but only 10% of vehicles carry front-seat passengers daily. Furthermore, the addition of a second screen adds to the complexity of the vehicle delivery process as it is difficult for dealers to teach new owners how to use the primary infotainment screen, let alone a second one.
  • Tesla might be losing its tech edge: Historically, Tesla owners have expressed enthusiasm for the brand’s technology and rated their vehicles highly, often overlooking quality concerns. However, as Tesla’s customer base expands beyond tech-hungry early adopters, this trend is waning as this year’s results show a shift to lower satisfaction across some problematic techs such as direct driver monitoring (score of 7.65).

Highest-Ranking Brands

Genesis ranks highest overall and highest among premium brands for innovation for a fourth consecutive year, with a score of 584 (on a 1,000-point scale). In the premium segment, Lexus (535) ranks second and BMW (528) ranks third.

Hyundai ranks highest among mass market brands for innovation for a fifth consecutive year, with a score of 518. Kia (499) ranks second and GMC (439) ranks third.

Advanced Technology Award Recipients

The U.S. Tech Experience Index (TXI) Study analyzes 40 automotive technologies, which are divided into four categories: convenience; emerging automation; energy and sustainability; and infotainment and connectivity. Only the 31 technologies classified as advanced are award eligible.

  • Toyota Sequoia is the mass market model receiving the convenience award for its camera rear-view mirror technology. The premium segment in this category is not award eligible.
  • Genesis GV70 is the premium model receiving the emerging automation award for front cross traffic warning. Kia Carnival is the mass market model receiving the emerging automation award, also for front cross traffic warning.
  • BMW iX receives the award for energy and sustainability in the premium segment for one-pedal driving. The mass market segment in the energy and sustainability category is not award eligible.
  • BMW X6 receives the award for infotainment and connectivity in the premium segment for phone-based digital key. Hyundai Santa Fe receives the award for infotainment and connectivity in the mass market segment, also for phone-based digital key.

The 2024 U.S. Tech Experience Index (TXI) Study is based on responses from 81,926 owners of new 2024 model-year vehicles who were surveyed after 90 days of ownership. The study was fielded from July 2023 through May 2024 based on vehicles registered from April 2023 through February 2024.

The U.S. Tech Experience Index (TXI) Study complements the annual J.D. Power U.S. Initial Quality Study℠ (IQS) and the J.D. Power U.S. Automotive Performance, Execution and Layout (APEAL) Study℠ by measuring how effectively each automotive brand brings new technologies to market. The U.S. Tech Experience Index (TXI) Study combines the level of adoption of new technologies for each brand with excellence in execution. The execution measurement examines how much owners like the technologies and how many problems they experience while using them.

For more information about the U.S. Tech Experience Index (TXI) Study, visit https://www.jdpower.com/business/us-tech-experience-index-txi-study.

See the online press release at http://www.jdpower.com/pr-id/2024091.

Explore AITechPark for the latest advancements in AI, IoT, Cybersecurity, AITech News, and insightful updates from industry experts!

WiMi Hologram Cloud developed multichannel VR interaction system
https://ai-techpark.com/wimi-hologram-cloud-developed-multichannel-vr-interaction-system/
Fri, 30 Jun 2023 14:05:00 +0000

WiMi Hologram Cloud Inc. (NASDAQ: WIMI) (“WiMi” or the “Company”), a leading global Hologram Augmented Reality (“AR”) Technology provider, today announced the development of a multichannel virtual reality interaction system. A multichannel interaction system is a collaborative approach that combines two or more input channels (e.g., voice, video, haptics, and gestures) in one system, making full use of different human sensory channels so that interaction becomes more natural and effective. In a multichannel interaction system, users can employ natural interaction methods such as voice, gestures, gaze, expressions, and lip movements to work collaboratively with the computer system. Both humans and computers are active participants in the exchange of information, and input channels can be combined in various ways: serially or in parallel, and as complementary or independent streams. In this way, human-computer interaction converges toward the form of human-human interaction, which substantially improves the naturalness and efficiency of interaction; this is expected to become the mainstream form of virtual reality human-computer interaction in the future.

The use of multichannel interaction in virtual reality has apparent advantages. It reduces coupling and the cognitive load on the user, significantly improves the recognition rate of inputs, and provides the user with flexible input methods to enhance interaction efficiency.

The system enables users to interact simultaneously using different channels based on voice, posture, or haptic input. In addition, techniques such as facial expression recognition and lip reading are also used for multichannel input. Multichannel interfaces can combine the advantages of individual channels or switch channels depending on the context of the environment. Because multichannel technology fuses input streams from multiple channels, using multichannel interaction in virtual reality can significantly improve system control performance. There are two main approaches to multichannel fusion: feature fusion and semantic fusion. Feature fusion combines the original input data at the signal level and is applicable when the connected channels are tightly coupled. Semantic fusion maps input data into semantic interpretations: it acquires the information streams from the input channels, applies preliminary pre-processing, and constructs a unified data representation.
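
The release names two fusion strategies without showing how they differ in practice. The sketch below is a rough illustration of feature-level fusion (merging raw signals) versus semantic fusion (combining per-channel interpretations); the channel data, function names, and thresholds are invented and are not part of WiMi's system.

```python
# Illustrative only: contrasts feature-level and semantic-level fusion of two
# input channels (voice and gesture). Data and thresholds are invented.
import numpy as np

voice_signal = np.random.rand(16)    # stand-in for an audio feature frame
gesture_signal = np.random.rand(16)  # stand-in for a hand-pose feature frame

# Feature fusion: merge low-level features into one vector before any
# interpretation; best suited to channels that are tightly coupled in time.
fused_features = np.concatenate([voice_signal, gesture_signal])

# Semantic fusion: interpret each channel independently, then combine the
# per-channel meanings into one command.
def interpret_voice(signal):
    return "select" if signal.mean() > 0.5 else "none"

def interpret_gesture(signal):
    return "point_left" if signal.mean() > 0.5 else "none"

def fuse_semantics(voice_intent, gesture_intent):
    if voice_intent == "select" and gesture_intent == "point_left":
        return "select object on the left"
    return "no action"

print(fused_features.shape)  # (32,) - one combined feature vector
print(fuse_semantics(interpret_voice(voice_signal),
                     interpret_gesture(gesture_signal)))
```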

Human-computer interaction (HCI) is the interactive relationship between a system and its user, in which a dialogue between human and computer is used, through some interactive means, to complete the exchange of information between the two. In HCI, humans’ natural interaction behaviors and the state changes of the physical space are inherently multichannel patterns.

Deep learning will make the system more intelligent for HCI in VR scenes, whether for speech recognition, emotion recognition, or human-computer dialogue. WiMi’s interaction system can enhance the ability of computer models to recognize, classify and analyze ambiguous behaviors. It leads HCI in VR to develop gradually toward greater intelligence, human-centered design, and context awareness, building a harmonious and natural virtual reality human-computer environment.

Visit AITechPark for cutting-edge Tech Trends around AI, ML, Cybersecurity, along with AITech News, and timely updates from industry professionals!

Grupo Antolin & Uniphy to Develop next gen in-car user-interfaces
https://ai-techpark.com/grupo-antolin-uniphy-to-develop-next-gen-in-car-user-interfaces/
Thu, 17 Mar 2022 16:00:00 +0000

Grupo Antolin, a global supplier of technological solutions for car interiors, and Uniphy, the leading provider of 3D smart-surface technology, have signed an agreement to collaborate on next-generation in-car user-interfaces. Thanks to this collaboration, Grupo Antolin will combine its advanced decorative and lighting technologies with Uniphy’s revolutionary Canvya™ smart-surface solutions to enable new highly functional 3D touch-control surfaces that are beautiful, intuitive, robust, safe, and economical.

The resulting HMIs (Human Machine Interfaces) will take advantage of the unparalleled design freedom to deliver high-performance touch, touch contours such as longitudinal or circular sliders, concave/convex touch-surface dials, touch-gesture and proximity recognition, together with the integration of displays and advanced lighting solutions.

Grupo Antolin’s extensive expertise and state-of-the-art car interior technology, together with its integration experience of 3rd party solutions into its products, will be paired with Uniphy’s revolutionary Canvya™ 3D smart-surface technology to deliver user interfaces with great added value.

Unlike standard capacitive surfaces, Uniphy’s patented technology uses novel optical sensing with a free-form and affordable three-layer optical laminate, powered by low power electronics to improve the quality, functionality, and performance of the solution. The collaboration is set to create a completely new touch interface that takes the user experience to a whole new level.

“Uniphy and Antolin are a perfect fit. Together we can produce unique smart 3D surfaces that can be seamlessly integrated into all interior parts of next generation vehicles. As part of this program, Uniphy will collaborate closely with our Lighting & HMI Business Unit as well as Walter Pack, Antolin’s strategic partner in films and decorative surfaces,” said Marta Cuevas, Grupo Antolin Lighting & HMI Business Unit Director.

“Antolin & Uniphy have the combined expertise to make decorative trim immersivity functional in a highly efficient and cost-effective way. The compatibilities of our capabilities, as well as a shared vision for enabling peerless user experiences, augurs well for realising smart surfaces that will evoke, in end users, passion for using the resulting products,” said Jim Nicholas, Uniphy Limited Chief Executive Officer.

As part of its strategy to consolidate its position as a global supplier of technological and innovative solutions, Grupo Antolin strives to integrate new technologies in the vehicle interior, from HMI functions to advanced driver-assistance systems, functional lighting, and smart surfaces with the highest perceived quality. The company focuses on helping OEMs to develop a more advanced, technological, and sustainable automotive interior that offers passengers a unique onboard experience.

Uniphy’s revolutionary 3D smart-surface solution combines novel algorithms and patented technologies to allow standard materials/components and mainstream manufacturing processes to be deployed to deliver feature-rich and freeform 3D smart interfaces. The Uniphy solution truly transforms product design. It enables designers to freely create HMIs that are striking as well as experiences that are intuitive and natural, while also delivering robustness and remaining cost-effective. The technology goes “Beyond Touch™” and unifies non-conductive, finger pressure sensitive touch sensing with the integration of physical HMI features including dials, buttons & sliders. It also supports haptic feedback, proximity and touch-gesture recognition whilst also being able to host additional proprietary HMI features.

For more such updates and perspectives around Digital Innovation, IoT, Data Infrastructure, AI & Cybersecurity, go to AI-Techpark.com.

Global Gesture Recognition Market to Reach $24.8 Billion by 2026
https://ai-techpark.com/global-gesture-recognition-market-to-reach-24-8-billion-by-2026/
Mon, 21 Feb 2022 16:45:00 +0000

Global Industry Analysts Inc. (GIA), the premier market research company, today released a new market study titled “Gesture Recognition – Global Market Trajectory & Analytics”. The report presents fresh perspectives on opportunities and challenges in a significantly transformed post-COVID-19 marketplace.

FACTS AT A GLANCE
What’s New for 2022?

  • Global competitiveness and key competitor percentage market shares
  • Market presence across multiple geographies – Strong/Active/Niche/Trivial
  • Online interactive peer-to-peer collaborative bespoke updates
  • Access to our digital archives and MarketGlass Research Platform
  • Complimentary updates for one year

Edition: 17; Released: February 2022
Executive Pool: 38036
Companies: 76 – Players covered include Apple Inc.; Cipia; Cognitec Systems GmbH; Elliptic Laboratories A/S; ESPROS Photonics Corporation; German Autolabs; GestureTek; Google LLC; HID Global Corporation; Infineon Technologies AG; Intel Corporation; iProov Ltd.; IrisGuard UK Ltd; Microchip Technology Inc.; Microsoft Corporation; OmniVision Technologies Inc.; OMRON Corporation; pmdtechnologies ag; PointGrab Inc.; Qualcomm Technologies, Inc.; Sony Depthsensing Solutions; Toposens GmbH; Ultraleap; XYZ Interactive and Others.
Coverage: All major geographies and key segments
Segments: Technology (Touch-Based, Touchless); End-Use (Consumer Electronics, Automotive, Healthcare, Other End-Uses)
Geographies: World; USA; Canada; Japan; China; Europe; France; Germany; Italy; UK; Rest of Europe; Asia-Pacific; Rest of World.

Complimentary Project Preview – This is an ongoing global program. Preview our research program before you make a purchase decision. We are offering complimentary access to qualified executives driving strategy, business development, sales & marketing, and product management roles at featured companies. Previews provide deep insider access to business trends; competitive brands; domain expert profiles; market data templates; and much more. You may also build your own bespoke report using our MarketGlass™ Platform, which offers thousands of data bytes without an obligation to purchase our report.

ABSTRACT

Global Gesture Recognition Market to Reach US$24.8 Billion by the Year 2026

Gesture recognition refers to an interface between human and machine that interprets human gestures through mathematical algorithms, bringing unmatched ease and comfort to users as they access a range of controls and product features on a machine. Gesture recognition allows users to interact with machines through hand and finger gesture movements without physically touching digital devices. The technology is gaining prominence among consumers and original equipment manufacturers (OEMs) because of increased user convenience while handling laptops, navigation devices, personal computers, and smartphones, among other devices. Manufacturers are focusing on innovation to add gesture recognition features to different consumer electronics, which in turn has improved safety, reliability, and convenience. By combining image processing and computer vision, the technology executes commands through gestures. The demand for touchless sensing is attributed to increasing demand for superior user experience, ease of use, and rising digitization across several sectors. Hygiene concerns owing to the COVID-19 pandemic have significantly boosted growth in the market. Escalating demand from various end-use sectors such as banking and finance, along with automotive, which is witnessing a staggering increase in connectivity demand, is likely to further augment growth in the market. The high accuracy of next-generation systems is likely to drive a substantial increase in touchless sensing demand in the years ahead.

Amid the COVID-19 crisis, the global market for Gesture Recognition estimated at US$11.8 Billion in the year 2022, is projected to reach a revised size of US$24.8 Billion by 2026, growing at a CAGR of 17% over the analysis period. Touch-Based, one of the segments analyzed in the report, is projected to grow at a 12.2% CAGR to reach US$12 Billion by the end of the analysis period. After a thorough analysis of the business implications of the pandemic and its induced economic crisis, growth in the Touchless segment is readjusted to a revised 21.8% CAGR for the next 7-year period. This segment currently accounts for a 44% share of the global Gesture Recognition market. A large number of companies across industries are making efforts to retrofit existing touch-based interfaces to touchless. The touchless human machine interface technology holds numerous benefits, including safety. In the coming years, touchless HMI technologies are anticipated to register significant uptake and provide lucrative opportunities for market participants.
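
As a quick sanity check on these figures, the compound annual growth rate implied by the 2022 and 2026 estimates can be recomputed with the standard CAGR formula; any gap versus the quoted 17% presumably reflects the report's longer analysis period rather than the four-year 2022–2026 span.

```python
# CAGR = (end_value / start_value) ** (1 / years) - 1
start_value = 11.8   # US$ billion, 2022 estimate
end_value = 24.8     # US$ billion, 2026 projection
years = 4            # 2022 -> 2026

cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")
# Prints roughly 20%; the report's 17% figure applies to its own,
# longer "analysis period" rather than this four-year window.
```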

The U.S. Market is Estimated at $2.7 Billion in 2022, While China is Forecast to Reach $5.2 Billion by 2026

The Gesture Recognition market in the U.S. is estimated at US$2.7 Billion in the year 2022. The country currently accounts for a 23.4% share in the global market. China, the world’s second largest economy, is forecast to reach an estimated market size of US$5.2 Billion in the year 2026 trailing a CAGR of 19.3% through the analysis period. Among the other noteworthy geographic markets are Japan and Canada, each forecast to grow at 15.1% and 16.8% respectively over the analysis period. Within Europe, Germany is forecast to grow at approximately 15.3% CAGR while Rest of European market (as defined in the study) will reach US$1.5 Billion by the end of the analysis period.

The proliferation of AI and ML technologies is causing paradigm shifts in the way we experience the world. Gesture recognition unfurls a spectrum of feature-enriching applications in smart navigation, consumer electronics, healthcare delivery, augmented reality gaming, home automation, live video streaming and virtual shopping. AI-enabled gesture recognition has made robotic surgeries precise and dependable, alongside more accurate and easier health monitoring. The advent of wearable sensors has further fortified the utility of AI-based gesture recognition systems by enhancing sensitivity through the integration of visual and somatosensory data in a manner almost identical to the skin. Propped up by consumer electronics, gesture recognition technology has made steady progress and gained customer acclaim in recent years, and it is set to make rapid strides in the design, education, security and real estate fields. Technology leaders Apple, Microsoft, Intel and Google dominate the market with their innovative gesture recognition-based human-machine interaction interfaces that are widely used across the board.

AI systems utilize the Time-of-Flight (ToF) principle and built-in data libraries for static gesture recognition to identify and track gestures. Sensors track body movements by gauging the distance to the object at successive slices of time and layering those measurements into a signal referenced to the maximum range and the recorded motion frequency. The consumer electronics market is abuzz with an innovation spree powered by gesture recognition and haptic feedback. Limix, an Italian company, developed a gesture-to-text conversion solution that identifies gestures in sign language and converts them to machine-readable text and audio captions optimized for playback on smartphones. uSens’ HGR solution enables smart TVs to track hand gestures and finger movements. Automated homes also deploy gesture recognition, largely for actionable commands. Gestoo developed an AI-based HGR solution for contactless control of audio systems and lighting through a smartphone interface.
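
The Time-of-Flight principle mentioned above reduces to a simple relationship: the sensor emits a light pulse, measures the round-trip time, and halves the distance the light travels in that time. A minimal worked example, with an invented round-trip time, looks like this:

```python
# Time-of-Flight distance estimate: d = c * t / 2, where t is the measured
# round-trip time of the emitted light pulse. The sample time is invented.
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def tof_distance(round_trip_time_s: float) -> float:
    """Distance to the reflecting object, in metres."""
    return SPEED_OF_LIGHT * round_trip_time_s / 2.0

# A pulse returning after ~6.7 nanoseconds corresponds to an object ~1 m away.
print(f"{tof_distance(6.7e-9):.3f} m")
```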

MarketGlass™ Platform
Our MarketGlass™ Platform is a free full-stack knowledge center that is custom configurable to today’s busy business executive’s intelligence needs! This influencer-driven interactive research platform is at the core of our primary research engagements and draws from unique perspectives of participating executives worldwide. Features include enterprise-wide peer-to-peer collaborations; research program previews relevant to your company; 3.4 million domain expert profiles; competitive company profiles; interactive research modules; bespoke report generation; monitoring of market trends and competitive brands; creation and publishing of blogs and podcasts using our primary and secondary content; tracking of domain events worldwide; and much more. Client companies will have complete insider access to the project data stacks. Currently in use by 67,000+ domain experts worldwide.

Our platform is free for qualified executives and is accessible from our website www.StrategyR.com or via our just-released mobile application on iOS or Android.

Visit AITechPark for cutting-edge Tech Trends around AI, ML, Cybersecurity, along with AITech News, and timely updates from industry professionals!

AMTRON, Corsight AI & I-Sec partner to deliver facial recognition
https://ai-techpark.com/amtron-corsight-ai-i-sec-partner-to-deliver-facial-recognition/
Tue, 17 Aug 2021 09:00:00 +0000

Assam Electronics Development Corporation Limited (AMTRON) has signed a tripartite MOU with Corsight AI, a leading Facial Recognition Technology provider, to establish a strong Facial Recognition Technology development and services portfolio, along with setting up a Facial Recognition Center of Excellence (“FR-COE”) at Tech City, Guwahati. I-Sec will be a strategic partner of AMTRON in this venture. I-Sec, along with its global partners, exclusively offers state-of-the-art emerging technologies and solutions across all sectors, from defence to space technologies.

With this collaboration, AMTRON aspires to provide Facial Recognition services, capacity development, research and skilling services to the Government of India, State Governments and allied government organizations and Public Sector Undertakings (“PSUs”) within India, in response to the rapidly evolving demand for Facial Recognition Technology.

Commenting on the relationship, Mr. Rob Watts, CEO of Corsight AI, said, “We are proud to be working with AMTRON and I-Sec, as we roll out the most advanced Facial Recognition Technology across the nation, to protect the public from harm and improve security and safety. Corsight AI software, which received top rankings by NIST and the Department of Homeland Security, is recognised as the market leader for both speed and accuracy, but also for privacy and ethical standards. We look forward to building a long-term relationship with these dynamic organisations. Our first task will be to build the Innovation Center at AMTRON’S prestigious offices and to engage with the client community to deliver real value and outcomes.”

Shri M.K. Yadava, IFS, MD, AMTRON, said, “In Government, there are lots of privacy related issues which need to be addressed. AMTRON has collaborated with Corsight to mitigate all of these issues. The defence and aviation sector of India could highly benefit from this alliance and we could see a major change in secured data privacy models in near future with integration of such robust technologies.”

“I-Sec, with AMTRON as its eminent partner, aims at the comprehensive and strategic development of Human Capital Intelligence, Training and Innovation Centers across sectors. This is just the beginning of a very exciting journey,” said Ms. Amita Singh, MD, I-Sec.

For more such updates and perspectives around Digital Innovation, IoT, Data Infrastructure, AI & Cybersecurity, go to AI-Techpark.com.

AI Facial Recognition Tech Leader CyberLink Launches PowerDVD 21
https://ai-techpark.com/ai-facial-recognition-tech-leader-cyberlink-launches-powerdvd-21/
Fri, 16 Apr 2021 08:15:00 +0000

With digital media content consumption on the rise across devices, the world’s no. 1 multimedia player delivers the seamless playback and streaming experience that today’s consumers expect and demand.

CyberLink Corp. (5203.TW) today announced the release of PowerDVD 21, the latest version of its award-winning movie and media playback software that allows users to watch 8K, 4K HDR Blu-ray, and a wide range of media formats across mobile phones, tablets, TVs, PCs and laptops.

PowerDVD 21 builds on CyberLink’s cutting-edge media technology leadership and offers an even more complete and intuitive user interface, as well as the most comprehensive cloud-based viewing and streaming experience for seamless enjoyment of content anywhere, in the best possible quality. An integrated and easy-to-use media manager automatically organizes the entire library of photos, videos, music, as well as Blu-ray and DVD movies, making PowerDVD a component essential to any home theater media center.

Offering a cross-device media experience has never been more essential. Over 80% of consumers now have two or more connected devices – with a 30% increase in time spent using video streaming apps on mobile. PowerDVD 21 has integrated and optimized the entire mobile experience to meet these changing video viewing habits. By syncing with CyberLink Cloud, PowerDVD 21 offers a complete cross-device, home theater experience where users can pause media on one device and continue watching on other PCs or smartphones right where they left off. PowerDVD 21 also extends content access to friends and family by generating shareable hyperlinks that allow playback of the content on their browser. PowerDVD 21 is the perfect choice for groups of people wishing to build a shared cloud-streaming collection and is ideal for increasingly popular virtual movie viewing parties. PowerDVD 21 also offers integrated streaming through Apple TV & Amazon Fire TV.

“Recognized as the world’s number one movie and media player, PowerDVD has a long history of pioneering multimedia playback features and functionalities, to the benefit of our millions of users,” said Dr. Jau Huang, CEO of CyberLink. “With the further optimization and integration into CyberLink Cloud and enhancements to an already user-friendly UI, we are proud to announce that PowerDVD 21 is more than just a Blu-ray and DVD player. It is the ultimate solution to every home entertainment need.”

In addition to offering a new cross-device playback and sharing experience, PowerDVD 21 benefits from a number of cutting-edge features such as a comprehensive information and artwork database, and unique AI-based facial recognition capabilities. All whilst continuing to guarantee better-than-original audio and video playback for Ultra HD Blu-ray, Blu-ray, and DVD, as well as major file formats, codecs and 360˚ video. PowerDVD 21 users can also enjoy spatial audio support for 360˚ videos, making VR video feel as realistic as being there in person. PowerDVD 21 is available in a variety of subscription and perpetual options.

CyberLink introduced PowerPlayer 365 in 2020, a streamlined alternative to PowerDVD for users who do not need to play DVDs and Blu-ray discs. In 2021, it now offers enhanced playback support for 8K and 4K media, including YouTube videos. Furthermore, with TrueTheater® Enhancements, PowerPlayer 365 can provide users with a cinema-level home theater experience for all their movies and content.

SAFR Facial Recognition Integrates with Geutebrück VMS
https://ai-techpark.com/safr-facial-recognition-integrates-with-geutebruck-vms/
Tue, 13 Apr 2021 15:00:00 +0000

Featuring real-time, automated, low-bias identification of opt-in staff and persons of interest

SAFR from RealNetworks, Inc. (NASDAQ: RNWK) today announced that its SAFR facial recognition system for live video is now integrated with the Geutebrück G-Core VMS (Video Management System). SAFR for Geutebrück is an AI layer that runs on top of the G-Core VMS and provides advanced video analytics that save time and increase the efficiency of surveillance operations. The best-in-class integration features live video overlays that display event details, streamlined enrollment of individuals appearing on the Geutebrück VMS directly into the SAFR identity database, and custom alarms and notifications that alert security personnel to SAFR events directly within the VMS.

With so many cameras deployed, it’s impossible for security staff to monitor them effectively. SAFR matches faces appearing in live video feeds against watchlist images more effectively (99.87%), and with less bias, than humans. This enables security personnel to prioritize feeds that require review while providing them the key information they need to respond to persons of interest more quickly. SAFR also recognizes individuals wearing masks with remarkable accuracy (98.85%). The enrolled or reference image is displayed side by side with the face detected in the VMS video. Operators have instant access to the enrolled person’s face image to confirm match events.
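
SAFR's matching pipeline is proprietary, but watchlist matching of this kind is typically built on comparing face embeddings. The sketch below shows only that general pattern; the embeddings, the 128-dimensional size, and the similarity threshold are invented and are not SAFR parameters.

```python
# Generic watchlist-matching sketch (not SAFR's actual implementation):
# compare a detected face embedding against enrolled reference embeddings by
# cosine similarity and report the best match above a threshold.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
watchlist = {                      # enrolled (reference) embeddings, invented
    "person_of_interest_A": rng.normal(size=128),
    "person_of_interest_B": rng.normal(size=128),
}
# Simulated detection: a noisy copy of an enrolled embedding.
detected = watchlist["person_of_interest_A"] + rng.normal(scale=0.1, size=128)

MATCH_THRESHOLD = 0.8              # illustrative value, not a SAFR setting
best_name, best_score = max(
    ((name, cosine_similarity(detected, ref)) for name, ref in watchlist.items()),
    key=lambda item: item[1],
)
if best_score >= MATCH_THRESHOLD:
    print(f"Match: {best_name} (similarity {best_score:.3f})")
else:
    print("No watchlist match")
```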

“Manual monitoring is expensive and inefficient. AI can perform real-time, automated identification of persons of interest, and identify previous offenders the moment they return and before they cause new incidents,” said Brad Donaldson, VP, Computer Vision & GM, SAFR. “Our powerful API and plugin architecture makes industry leading integrations such as the one achieved with Geutebrück possible.”

The tight integration enables operators to automatically enroll faces into the SAFR database by simply drawing a marquee around a face in the Geutebrück G-Core VMS. Operators can use SAFR’s information overlays within the VMS video feeds, making it easy to quickly and accurately separate unknown people and potential threats from authorized personnel. System admins can easily configure which face recognition information is captured and recorded in the VMS. Additionally, operators have the ability to search Geutebrück video feeds for alerts using a person’s name, watchlist name, or ID class (threat, no concern, concern, stranger).
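
The search capability described above amounts to filtering recognition events by metadata such as name, watchlist, or ID class. The following is a generic sketch of that idea, not the Geutebrück or SAFR API; the record fields and sample events are invented.

```python
# Generic sketch of filtering face-recognition events by metadata; the event
# records and field names are illustrative, not the SAFR/G-Core integration.
from dataclasses import dataclass

@dataclass
class RecognitionEvent:
    person_name: str
    watchlist: str
    id_class: str      # e.g. "threat", "concern", "no concern", "stranger"
    camera: str
    timestamp: str

events = [
    RecognitionEvent("Jane Doe", "staff", "no concern", "lobby-cam-1", "2021-04-13T09:02:11"),
    RecognitionEvent("Unknown", "-", "stranger", "dock-cam-3", "2021-04-13T09:05:47"),
    RecognitionEvent("John Roe", "persons of interest", "threat", "lobby-cam-1", "2021-04-13T09:07:30"),
]

def search(events, *, id_class=None, watchlist=None, person_name=None):
    """Return events matching every supplied filter."""
    return [
        e for e in events
        if (id_class is None or e.id_class == id_class)
        and (watchlist is None or e.watchlist == watchlist)
        and (person_name is None or e.person_name == person_name)
    ]

for event in search(events, id_class="threat"):
    print(event)
```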

“As a world class provider of video security software solutions in mission critical environments, we are thrilled to offer SAFR’s superior technology for face recognition as part of a comprehensive solution. The seamless integration of SAFR’s AI-powered analytics together with Geutebrück’s ultra-robust video management software makes day-to-day operational tasks an effortless experience with the highest reliability,” comments Norbert Herzer, Product Manager, Geutebrück.

Intuitive Motion Control Leader, Nexteer Earns Manufacturing Award
https://ai-techpark.com/intuitive-motion-control-leader-nexteer-earns-manufacturing-award/
Fri, 02 Apr 2021 11:47:03 +0000

Demonstrates Ongoing Commitment to Manufacturing 4.0 Advancement

Marks Fourth Consecutive Year Receiving Award

Nexteer Automotive, a leader in intuitive motion control, has been recognized as a 2021 Manufacturing Leadership Awards winner for its outstanding achievement in Enterprise Integration Technology Leadership.

Nexteer’s winning project, Manufacturing Engineering Equipment Database (MEED), is part of the Company’s Digital Trace™ Manufacturing strategy. MEED serves as the Company’s global standard process for equipment launch tracking. The innovative enhancements made in the 2.0 version allow Nexteer to easily and collaboratively track equipment status to timed objectives and produce real-time metrics on its Manufacturing Engineering dashboard. MEED outputs are also linked with other Nexteer digital programs – thus completing the Company’s digital thread of data from engineering planning to operational execution.

“Nexteer is honored to be recognized by the National Association of Manufacturers for the fourth straight year for our commitment to manufacturing excellence. Each of our award-winning projects has grown our capabilities, enhanced our efficiency and strengthened our commitment to Manufacturing 4.0 advancement,” said Robin Milavec, Senior Vice President, Executive Board Director, Chief Technology Officer (CTO) and Chief Strategy Officer (CSO), Nexteer Automotive. “Our 2021 award-winning project, MEED, is a valuable tool in our Manufacturing Engineering toolbox that further enhances our Digital Trace™ Manufacturing strategy.”

In 2020, Nexteer earned a Manufacturing Leadership Award by the National Association of Manufacturers for outstanding achievement in Manufacturing Engineering Global Talent Management and Training. In 2019, Nexteer was recognized with a Manufacturing Leadership Award for Enterprise Integration and Technology Leadership, and in 2018, Nexteer received the Engineering and Production Technology Award from the National Association of Manufacturers (then Manufacturing Leadership Council).

Nexteer will be formally recognized at the Manufacturing Leadership Awards Gala, which will take place as a virtual event on May 19, 2021.

SignAll Launches Ace ASL App: AI Will Help to Learn Sign Language
https://ai-techpark.com/signall-launches-ace-asl-app-ai-will-help-to-learn-sign-language/
Fri, 02 Apr 2021 05:00:00 +0000

AI-empowered solution enables users to practice American Sign Language (ASL). The system uses a camera to recognize signing and provide feedback. Ace ASL app is based on the same sign recognition technology that makes possible automated and spontaneous translation between American Sign Language and English. The mobile application is the first ASL learning app to provide real-time feedback on signing.

Continuing its course on employing SignAll’s technology for mobile and online solutions, the company launches its first mobile application for learning American Sign Language.

“Today we present the first mobile solution that can track and analyze users’ signing in real time. Thanks to AI technology, used for automated translation between ASL and English, we can offer an application that elevates the experience of learning sign language. When it comes to spoken languages, there are many apps that allow users to practice their pronunciation and get immediate feedback from the app. However, this functionality was unimaginable for learning sign languages. Now, we’re unveiling a mobile application for interactive learning of fingerspelling and practicing fingerspelling recognition as the first step. We expect to extend the functionality and offer more apps for more and more complex learning. We see the particular importance of this news in increasing equality for languages. Placing sign languages on the same level as verbal languages is a part of SignAll’s mission.” – Zsolt Robotka, CEO, SignAll Technologies.

The application includes a learning section structured as units. Quizzes at the end of each unit allow for quick and easy self-assessment. The “Free Practice” section gives users a chance to practice their expressive and receptive fingerspelling skills. Receptive practice improves users’ ability to recognize fingerspelling from others at three different speeds: easy, medium and advanced. The “Challenge” section is for confident users to practice what they’ve learned. It is built with a growing level of complexity, and successful results unlock the following levels.

Ace ASL is available in the App Store for iOS devices starting from April 1, 2021. Development for Android is underway and is expected to be available in the Google Play Store in the coming months.

We welcome journalists and bloggers to test the application on iOS. Contact me, introducing yourself, your role, and the media outlet. You will receive a personal code to get advanced access to all of the app’s functions.

Intel AI-Powered Backpack Helps Visually Impaired Navigate World
https://ai-techpark.com/intel-ai-powered-backpack-helps-visually-impaired-navigate-world/
Fri, 26 Mar 2021 12:29:14 +0000

Jagadish K. Mahendran models his AI-powered, voice-activated backpack that can help the visually impaired navigate and perceive the world around them.

What’s New: Artificial intelligence (AI) developer Jagadish K. Mahendran and his team designed an AI-powered, voice-activated backpack that can help the visually impaired navigate and perceive the world around them. The backpack helps detect common challenges such as traffic signs, hanging obstacles, crosswalks, moving objects and changing elevations, all while running on a low-power, interactive device.

“Last year when I met up with a visually impaired friend, I was struck by the irony that while I have been teaching robots to see, there are many people who cannot see and need help. This motivated me to build the visual assistance system with OpenCV’s Artificial Intelligence Kit with Depth (OAK-D), powered by Intel.”
–Jagadish K. Mahendran, Institute for Artificial Intelligence, University of Georgia

Why It Matters: The World Health Organization estimates that globally 285 million people are visually impaired. Meanwhile, visual assistance systems for navigation are fairly limited and range from Global Positioning System-based, voice-assisted smartphone apps to camera-enabled smart walking stick solutions. These systems lack the depth perception necessary to facilitate independent navigation.

“It’s incredible to see a developer take Intel’s AI technology for the edge and quickly build a solution to make their friend’s life easier,” said Hema Chamraj, director, Technology Advocacy and AI4Good at Intel. “The technology exists; we are only limited by the imagination of the developer community.”

How It Works: The system is housed inside a small backpack containing a host computing unit, such as a laptop. A vest jacket conceals a camera, and a fanny pack is used to hold a pocket-size battery pack capable of providing approximately eight hours of use. A Luxonis OAK-D spatial AI camera can be affixed to either the vest or fanny pack, then connected to the computing unit in the backpack. Three tiny holes in the vest provide viewports for the OAK-D, which is attached to the inside of the vest.

“Our mission at Luxonis is to enable engineers to build things that matter while helping them to quickly harness the power of Intel AI technology,” said Brandon Gilles, founder and chief executive officer, Luxonis. “So, it is incredibly satisfying to see something as valuable and remarkable as the AI-powered backpack built using OAK-D in such a short period of time.”

The OAK-D unit is a versatile and powerful AI device that runs on an Intel Movidius VPU and the Intel® Distribution of OpenVINO™ toolkit for on-chip edge AI inferencing. It is capable of running advanced neural networks while providing accelerated computer vision functions and a real-time depth map from its stereo pair, as well as color information from a single 4K camera.

A Bluetooth-enabled earphone lets the user interact with the system via voice queries and commands, and the system responds with verbal information. As the user moves through their environment, the system audibly conveys information about common obstacles including signs, tree branches and pedestrians. It also warns of upcoming crosswalks, curbs, staircases and entryways.
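
The release does not describe the backpack's software in detail, so the following is only a rough sketch of the last step it mentions: take a depth map from the stereo camera, find the nearest obstacle in the central field of view, and turn it into a spoken warning. The depth values and distance threshold are invented, and text-to-speech is reduced to a print call.

```python
# Rough sketch of depth-map-to-warning logic (not the project's actual code).
from typing import Optional
import numpy as np

def nearest_obstacle_ahead(depth_map_m: np.ndarray, max_range_m: float = 5.0) -> Optional[float]:
    """Return the distance to the closest valid point in the central region."""
    h, w = depth_map_m.shape
    centre = depth_map_m[h // 3: 2 * h // 3, w // 3: 2 * w // 3]
    valid = centre[(centre > 0) & (centre < max_range_m)]
    return float(valid.min()) if valid.size else None

# Invented 4x6 depth map in metres, as if produced by a stereo depth camera.
depth_map = np.array([
    [4.9, 4.8, 4.7, 4.6, 4.8, 4.9],
    [4.5, 1.2, 1.1, 1.3, 4.4, 4.6],
    [4.3, 1.1, 1.0, 1.2, 4.2, 4.5],
    [4.0, 3.9, 3.8, 3.7, 3.9, 4.1],
])

distance = nearest_obstacle_ahead(depth_map)
if distance is not None and distance < 2.0:               # illustrative threshold
    print(f"Obstacle ahead, about {distance:.1f} metres")  # would go to text-to-speech
```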

More Context: A Vision System for the Visually Impaired (Case Study) | Visual Assistance System for the Visually Impaired (Video) | Intel OpenVINO Toolkit | Artificial Intelligence at Intel
