Beyond Words: Large Language Models Expand AI’s Horizon

Image credit: NVIDIA
Media Release by NVIDIA

Back in 2018, BERT got people talking about how machine learning models were learning to read and speak. Today, large language models, or LLMs, are growing up fast, showing dexterity in all sorts of applications.

For one, they’re speeding drug discovery, thanks to research from the Rostlab at the Technical University of Munich, as well as work by a team from Harvard, Yale, New York University and others. In separate efforts, those groups applied LLMs to interpret the strings of amino acids that make up proteins, advancing our understanding of these building blocks of biology.
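As a rough illustration of how a protein can be treated like a sentence, the sketch below maps each of the 20 standard amino-acid letters to a token ID, the first step before such sequences are fed to a transformer. The vocabulary indexing and the peptide string here are illustrative, not any lab’s actual scheme:

```python
# Illustrative only: treating a protein's amino-acid string like text.
# Protein language models tokenize sequences roughly this way --
# one token per residue -- before feeding them to a transformer.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard residues
VOCAB = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

def tokenize(sequence):
    """Map an amino-acid string to a list of integer token IDs."""
    return [VOCAB[residue] for residue in sequence]

seq = "MKTAYIAK"  # a made-up peptide fragment
print(tokenize(seq))  # [10, 8, 16, 0, 19, 7, 0, 8]
```

From there, the model learns statistical patterns over residues the same way a text LLM learns patterns over words.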

It’s one of many inroads LLMs are making in healthcare, robotics and other fields.

A Brief History of LLMs

Transformer models — neural networks, defined in 2017, that can learn context in sequential data — got LLMs started.
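To make that concrete, here is a minimal sketch of the scaled dot-product attention at the heart of transformer models. The toy vectors stand in for learned embeddings; real models add learned query, key and value projections and many layers:

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of floats.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(queries, keys, values):
    """Scaled dot-product attention over toy vectors.

    Each output is a weighted average of all value vectors, with
    weights from query-key similarity -- this is how a transformer
    lets every position in a sequence 'see' every other position.
    """
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out

# Three toy 2-D token embeddings, used as Q, K and V at once.
x = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
ctx = attention(x, x, x)
print(len(ctx), len(ctx[0]))  # 3 2
```

Because the attention weights sum to one, each output row is a context-aware blend of the whole sequence.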

Researchers behind BERT and other transformer models made 2018 “a watershed moment” for natural language processing, a report on AI said at the end of that year. “Quite a few experts have claimed that the release of BERT marks a new era in NLP,” it added.

Developed by Google, BERT (short for Bidirectional Encoder Representations from Transformers) delivered state-of-the-art scores on NLP benchmarks. In 2019, Google announced that BERT powers the company’s search engine.

Google released BERT as open-source software, spawning a family of follow-ons and setting off a race to build ever larger, more powerful LLMs.

For instance, Meta created an enhanced version called RoBERTa, released as open-source code in July 2019. For training, it used “an order of magnitude more data than BERT,” the paper said, and leapt ahead on NLP leaderboards. A scrum followed.

Scaling Parameters and Markets

For convenience, score is often kept by the number of an LLM’s parameters or weights, measures of the strength of a connection between two nodes in a neural network. BERT had 110 million, RoBERTa had 123 million, then BERT-Large weighed in at 354 million, setting a new record, but not for long.
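Parameter counting is simple bookkeeping. A hypothetical sketch for a stack of dense layers shows the arithmetic; transformer blocks add attention matrices, but the tally works the same way:

```python
def count_params(layer_sizes):
    """Count weights and biases in a stack of fully connected layers.

    A layer with n_in inputs and n_out outputs has n_in * n_out
    weights plus n_out biases.
    """
    total = 0
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
        total += n_in * n_out + n_out
    return total

# A tiny two-layer feed-forward block of the kind found inside a
# transformer; real LLMs repeat this bookkeeping across hundreds of
# blocks to reach billions of parameters.
print(count_params([768, 3072, 768]))  # 4722432
```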

In 2020, researchers at OpenAI and Johns Hopkins University announced GPT-3, with a whopping 175 billion parameters, trained on a dataset with nearly a trillion words. It scored well on a slew of language tasks and even handled three-digit arithmetic.

“Language models have a wide range of beneficial applications for society,” the researchers wrote.

Experts Feel ‘Blown Away’

Within weeks, people were using GPT-3 to create poems, programs, songs, websites and more. Recently, GPT-3 even wrote an academic paper about itself.

“I just remember being kind of blown away by the things that it could do, for being just a language model,” said Percy Liang, a Stanford associate professor of computer science, speaking in a podcast.

GPT-3 helped motivate Stanford to create a center Liang now leads, exploring the implications of what it calls foundation models that can handle a wide variety of tasks well.

Toward Trillions of Parameters

Last year, NVIDIA announced the Megatron 530B LLM that can be trained for new domains and languages. It debuted with tools and services for training language models with trillions of parameters.

“Large language models have proven to be flexible and capable … able to answer deep domain questions without specialized training or supervision,” Bryan Catanzaro, vice president of applied deep learning research at NVIDIA, said at that time.

Making it even easier for users to adopt the powerful models, the NVIDIA NeMo LLM service debuted in September at GTC. It’s an NVIDIA-managed cloud service for adapting pretrained LLMs to perform specific tasks.

Transformers Transform Drug Discovery

The advances LLMs are making with proteins and chemical structures are also being applied to DNA.

Researchers aim to scale their work with NVIDIA BioNeMo, a software framework and cloud service to generate, predict and understand biomolecular data. Part of the NVIDIA Clara Discovery collection of frameworks, applications and AI models for drug discovery, it supports work in widely used protein, DNA and chemistry data formats.

NVIDIA BioNeMo features multiple pretrained AI models, including the MegaMolBART model, developed by NVIDIA and AstraZeneca.

LLMs Enhance Computer Vision

Transformers are also reshaping computer vision as powerful LLMs replace traditional convolutional AI models. For example, researchers at Meta AI and Dartmouth designed TimeSformer, an AI model that uses transformers to analyze video with state-of-the-art results.
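The core trick in video transformers of this kind is turning frames into a flat sequence of patch tokens that attention can operate on. The sketch below shows that reshaping step in simplified form; the patch size and pixel values are arbitrary, and real models also embed each patch and add positional information:

```python
def video_to_patch_tokens(frames, patch):
    """Flatten a video into a sequence of patch tokens.

    frames: list of 2-D grids (height x width of pixel values).
    patch:  side length of each square patch.
    Returns one flattened patch per (frame, row-block, col-block),
    in scan order -- the token sequence a transformer would attend over.
    """
    tokens = []
    for f in frames:
        h, w = len(f), len(f[0])
        for r in range(0, h, patch):
            for c in range(0, w, patch):
                tokens.append([f[r + i][c + j]
                               for i in range(patch)
                               for j in range(patch)])
    return tokens

# Two 4x4 "frames" cut into 2x2 patches: 2 frames * 4 patches = 8 tokens.
frames = [[[t * 16 + r * 4 + c for c in range(4)] for r in range(4)]
          for t in range(2)]
print(len(video_to_patch_tokens(frames, 2)))  # 8
```

Once video is a token sequence, the same attention machinery used for words applies unchanged.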

Experts predict such models could spawn all sorts of new applications in computational photography, education and interactive experiences for mobile users.

In related work earlier this year, two companies released powerful AI models to generate images from text.

OpenAI announced DALL-E 2, a transformer model with 3.5 billion parameters designed to create realistic images from text descriptions. And recently, Stability AI, based in London, launched Stable Diffusion.

Writing Code, Controlling Robots

LLMs also help developers write software. Tabnine — a member of NVIDIA Inception, a program that nurtures cutting-edge startups — claims it’s automating up to 30% of the code generated by a million developers.

Taking the next step, researchers are using transformer-based models to teach robots used in manufacturing, construction, autonomous driving and personal assistants.

For example, DeepMind developed Gato, an LLM that taught a robotic arm how to stack blocks. The 1.2-billion parameter model was trained on more than 600 distinct tasks so it could be useful in a variety of modes and environments, whether playing games or animating chatbots.

“By scaling up and iterating on this same basic approach, we can build a useful general-purpose agent,” researchers said in a paper posted in May.

It’s another example of what the Stanford center in a July paper called a paradigm shift in AI. “Foundation models have only just begun to transform the way AI systems are built and deployed in the world,” it said.

Accenture and Google Cloud expand partnership to accelerate value from technology, data, AI

Image credit: Accenture

Accenture and Google Cloud announced the expansion of their global partnership, reaffirming their commitment to growing their respective talent pools.

The partnership enhances their shared capabilities, creating novel data- and AI-driven solutions, and enhancing support for clients as they forge a solid digital foundation and reimagine their businesses in the cloud.

Today, Accenture said over 13,000 Accenture cloud specialists holding more than 5,000 Google Cloud certifications assist businesses in building, migrating, and running their operations on Google Cloud. Building on years of successful engagement, Accenture and Google Cloud will devote greater resources going forward to supporting clients in running cloud-first enterprises and delivering value within condensed timeframes.

Google Cloud CEO Thomas Kurian said that to carry out their digital transformations and quickly realise the commercial value of their cloud investments, large international enterprises need to hire highly qualified business transformation consultants.

Accenture Cloud First global lead Karthik Narain said the cloud offers businesses enormous potential to become more inventive and resilient, but the path to that value is full of obstacles.

“Our expanded partnership with Google Cloud is designed to help clients build a strong digital core utilizing Google Cloud infrastructure and products in areas like cybersecurity, data analytics, application modernization and more. A strong digital core helps companies respond to change and shifting dynamics within their industry. By working with Google Cloud to expand talent and pre-build industry-specific, productized solutions, we will accelerate time to value for clients on Google Cloud, from public to edge and everything in between,” Narain stated.

New Partnership Investment

Accenture and Google Cloud will support businesses in utilising the full potential of data and the cloud, including:

  • Google Cloud Talent Creation: Accenture will build on its 15,000 Google Cloud credentials, expanding skills in fields like mainframe migration, cybersecurity, sustainability, and application modernisation.
  • New Solutions Powered by Google Cloud: Accenture and Google Cloud will create solutions and accelerators for specialised industry use cases such as customer transformation, sales and marketing optimisation, smart analytics, and visual inspection, among others.
  • New Global Innovation Hubs: To quickly iterate, pilot, and deploy cutting-edge solutions on Google Cloud, Accenture and Google Cloud will continue to invest in new collaborative Innovation Hubs in Dublin and other worldwide locations. Specialized technological use cases for data analytics, AI, ML, application modernisation, infrastructure, security, and SAP will be addressed by a joint engineering centre of excellence.
  • ai.RETAIL Optimised for Google Cloud: Accenture’s integrated retail platform, ai.RETAIL, will be customised to make the most of Vertex AI and Google Cloud’s Product Discovery capabilities to help businesses increase customer engagement and conversion rates and create a more sustainable supply chain.
WISeKey and OISTE.org Foundation Presented AI and Digital Identity at the Global Dialogue on the Interplay Between Human Intelligence and Artificial Intelligence

Image credit: WISeKey
Media Release by WISeKey

WISeKey International Holding Ltd, a leading global cybersecurity, AI, Blockchain and IoT company, and the OISTE.org foundation presented the subject of AI and Digital Identity at the Global Dialogue on the Interplay Between Human Intelligence and Artificial Intelligence.

The United Nations Alliance of Civilizations (UNAOC) and the Fundación Onuart are co-organizing a high-level global dialogue on the interplay between Human Intelligence and Artificial Intelligence for better public affairs management, to pave the way for new private sector initiatives that bolster positive global development driven by innovation and inclusiveness.

AI, in the service of humanity, can drive diversity, respect and progress for all through innovation. Moving forward under a strong public-private partnership is essential to this end. The Global Dialogue aims to further strengthen this cooperation.

The two-day event will be inaugurated by high-level keynote speakers, with participants from the public and private sector bringing forth their expertise in the field. Speakers will discuss and share with the audience how their organizations and companies approach AI and AI-enabled technologies and expound on their vision moving forward. 

During his presentation, Carlos Moreira, WISeKey’s Founder and CEO, highlighted the need for decentralized Digital Identity and Blockchain technologies in line with the United Nations’ Sustainable Development Goals, which aim to provide every person on the planet with a solid and tamper-proof digital identity based on common, interoperable standards by 2030.
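A sketch of the basic mechanism behind such tamper-proof records is a hash chain, in which each entry commits to the one before it, so altering any entry invalidates everything after it. This is illustrative only, not WISeKey’s actual design:

```python
import hashlib
import json

def add_record(chain, payload):
    """Append a record whose hash covers both its payload and the
    previous record's hash -- the chaining that makes tampering visible."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps({"prev": prev, "payload": payload}, sort_keys=True)
    chain.append({"prev": prev, "payload": payload,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})
    return chain

def verify(chain):
    """Recompute every hash and link; any edit anywhere breaks the chain."""
    prev = "0" * 64
    for rec in chain:
        body = json.dumps({"prev": rec["prev"], "payload": rec["payload"]},
                          sort_keys=True)
        if rec["prev"] != prev or \
           hashlib.sha256(body.encode()).hexdigest() != rec["hash"]:
            return False
        prev = rec["hash"]
    return True

chain = []
add_record(chain, {"id": "alice", "claim": "credential-issued"})
add_record(chain, {"id": "alice", "claim": "credential-renewed"})
print(verify(chain))  # True
```

Real identity systems layer digital signatures and distributed consensus on top, but the tamper-evidence comes from this chaining.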

A digital identity under the control of the person is a fundamental human right that is currently neither protected nor understood. It is also an endangered right, given the exponential pace of technology. Current digital technologies track and scrutinize us by taking into consideration only our consumer identity, not our human identity.

The digital economy considers every click, search or like as an asset to be monetized. Our lives, reflected in cyberspace, are plundered for behavioral data for the sake of a system that converts our freedom into profit. We are quietly being domesticated into accepting as normal that decision rights vanish before we even know that there is a decision to make. 

We, collectively as humans, have to decide if we are building a better future for humanity with the help of magnificent technology, or building a future of better technology at the expense of humanity.

We’ve been down this road before, and it didn’t turn out well. We collectively made the wrong decision, or, better said, we didn’t make the right decision fast enough. We didn’t put humanity first and instead got caught up in the promise of technology.

“A new awareness, infused by a human-rights-based approach that considers each individual ‘netizen’ as a dignified moral being, worthy of respect, is required. Otherwise, our connectivity will continue to offer a perverse amalgam of empowerment inextricably layered with diminishment,” said Mr. Moreira.

Iveda JV targets Philippines municipalities for Smart City technology

Image credit: Iveda

Iveda announced today the formation of the Iveda Philippines joint venture (JV), establishing the company’s presence in a new and rapidly developing market for Smart City technologies.

Iveda is a global provider of artificial intelligence (AI) video search, Internet of Things (IoT), video surveillance, and digital transformation technologies for Smart Cities.

Iveda forecasts that revenue per Smart City implementation will range from $1 million to $3 million, including recurring revenue from annual payments for software maintenance and updates. As is the case in most foreign nations, having a local presence is critical to developing and maintaining the relationships required to execute long-term mutually beneficial partnerships.

Iveda CEO David Ly and local Iveda Philippines officials have presented to municipal leaders from North to South interested in Iveda Smart City technologies, including Manila, Paranaque, Mandaluyong, Cebu, Mandaue, and Davao. The Iveda Philippines team also showcased IvedaAI benefits to Gen. Vicente Danao and his employees from the Philippine National Police, the country’s law enforcement body.

IvedaAI has been trained to recognise and detect jeepneys and tricycles, which are unique to the Philippines and the most prevalent modes of transportation in the country. Flooding also afflicts the entire country during typhoon season; IvedaAI can detect impending flood dangers and notify the appropriate agencies before they do major harm to people and property. Most importantly, IvedaAI adds intelligence to existing video surveillance cameras around the country, helping to improve public safety, traffic intelligence, and the overall efficacy of existing infrastructure without requiring an overhaul.

“In addition to IvedaAI, city officials also asked for help on critical infrastructure improvements, such as new command centers and upgrades to existing ones, similar to our Smart City projects in Taiwan,” Ly said.

In recent years, the Philippine Government has expanded its dependence on current and next-generation technologies, particularly in the aftermath of the Taal Volcano eruption and the 2021 Odette typhoon. According to the International Trade Association, cities across the country have since digitised services and established standards for data storage, security, and utilisation.

Iveda Philippines is led by managing director Stef Saño. Saño received his education in the United States, earning a Master of Science in Systems Science and Mathematics, a Bachelor of Science in Systems Science and Engineering, and a Bachelor of Arts in Economics from Washington University.

“The team has been engaging the municipality decision makers non-stop, demonstrating Iveda’s unmatchable value.  Customers are responding very well to the crystal-clear message of a cost-efficient method of enhancing the effectiveness of their existing video surveillance and command centers,” Saño said.

Iveda’s first customer in the Philippines was the Philippine Long Distance Telecommunications Company (PLDT), which purchased an IvedaXpress plug-and-play video surveillance system. Iveda has aimed to expand its presence in the Philippines following the success of the company’s IvedaAI intelligent video search engine project in Caloocan City in 2018 and a successful pilot project in Metro Manila in 2019. Last month, Iveda was hosted at meetings by municipal officials across the archipelago’s three main island groups, Luzon, Visayas, and Mindanao.

“We see the potential in the Philippines much in the same way we are now seeing the success of Iveda Taiwan. Gaining a foothold in a country that is looking to modernize all aspects of its infrastructure will be beneficial to the Filipino people as well as Iveda overall,” Ly said.

In an address to the New York Stock Exchange last month, President Sabin Aboitiz of Aboitiz Group said that the Philippines is one of the fastest-growing economies and the next big thing in Asia.

“Now more than ever, with the dawn of a new era of digital progress, and an environment that has never been more enabling and conducive for business, the Philippines is ripe and open for investment,” Aboitiz stated.

Numerous market research firms expect the worldwide Smart City industry to keep growing. MarketsAndMarkets projects that the global Smart City market will reach $873.7 billion by 2026, up from $457 billion in 2021.
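Those figures imply a compound annual growth rate of roughly 14 percent, which can be checked in a couple of lines:

```python
# Implied compound annual growth rate: $457B in 2021 growing to
# $873.7B by 2026 is five years of compounding.
start, end, years = 457.0, 873.7, 5
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%}")  # 13.8%
```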

IBM and Tietoevry Form a Global Collaboration Taking a New Step Forward in the Development of Financial Services Technology

Image credit: IBM
Media Release by IBM

Today at Sibos 2022, IBM and Tietoevry, an IT software and services company headquartered in Finland, announced a global technology and consulting collaboration for Tietoevry’s financial technology software unit, Tietoevry Banking, to help deliver secure and innovative payment and transaction technology to banks globally.

Payment transactions, card payments and instant payments are a rapidly changing and growing industry that affects millions of households globally. As payment transactions continue to grow, cybercrime and data breaches are also on the rise. It has been estimated that financial services may suffer billions of dollars in losses due to cybercrime. Tietoevry Banking chose to collaborate with IBM to leverage IBM Cloud for Financial Services and IBM Consulting to help accelerate clients’ hybrid cloud adoption while balancing the need to address security and compliance requirements with driving innovation.

Ilkka Korkiakoski, Head of Payments at Tietoevry Banking, said: “For Tietoevry Banking, the collaboration signifies a leap into a financial market sector that is on another scale. Together with IBM we can offer significant added value to the financial sector: a scalable consultancy practice for system integration and volume transition to cloud designed to support regulatory compliance, best practices from the SaaS and SLA industries, and automation and flexibility in performance and volume processing. Combining IBM’s state-of-the-art technology with our deep knowledge in payments and cards is a perfect match to drive the payments industry to the next level. It will open new doors and can extend our footprint and delivery capability outside Northern Europe.”

Tietoevry Banking provides scalable and modular Banking-as-a-Service and software built on extensive industry expertise, accelerating the digital transformation of financial institutions across the Nordics and globally. In this collaboration, Tietoevry Banking will onboard its payments software portfolio, including Card Suite, Payment Hub, Virtual Account Management and Instant Payments Solutions, to IBM Cloud for Financial Services, and provide its managed services and SaaS capabilities to help global banks facilitate card management and payments on an industry cloud with built-in security and compliance controls. The collaboration aims to help banks address the industry’s stringent compliance, security, and resiliency requirements while supporting business transformation, volume migration to new platforms and innovation. In addition, IBM Consulting and Tietoevry Banking will work together to help joint clients transform and modernize at scale and support them on their journey to hybrid cloud.

Paul Krogdahl, CTO, Global Core Banking & Payments ISV Practice at IBM, said: “IBM has a long history working with clients in highly regulated industries and IBM Cloud for Financial Services was designed to help banks drive innovation and to support security and compliance needs. This initiative with Tietoevry Banking further expands our technology portfolio of financial services ecosystem partners with additional capabilities. We expect to see robust growth in the payments, cards and banking services as a platform space as an increasing number of banks globally choose to consume services as part of their business strategy rather than buying or building software on their own.”

As part of this work, IBM Consulting will be a global system integrator, implementation partner and managed operations partner for Tietoevry Banking’s payment software portfolio. IBM Consulting will provide the deep financial services expertise needed to bring long-term value to both existing and future clients and help guide their transformation. This collaboration will also help Tietoevry Banking advance its digitization strategy and enhance its payments solution with increased security and scalability. 

AUCloud launches Australia’s first Sovereign Quantum-Safe Encryption Service powered by Arqit QuantumCloud™

Image credit: Sovereign Cloud Australia Pty Ltd
Media Release by AUCloud

Sovereign Cloud Australia Pty Ltd (AUCloud), a leader in sovereign Infrastructure-as-a-Service (IaaS), and Arqit Quantum Inc. (Arqit), a leader in quantum-safe encryption, are pleased to announce general availability of the Asia Pacific region’s first Quantum Safe Symmetric Key Agreement Software.

Powered by Arqit’s QuantumCloud™, the service is available now from AUCloud as a Platform as a Service (PaaS), bringing quantum-safe encryption capability to the Australian market and near-region customers. The service enables governments and enterprises to protect themselves today against “Harvest Now, Decrypt Later” quantum computing attacks and greatly improve the security of a variety of IoT, defence and financial services applications, in a way that is unachievable with other post-quantum cryptographic methods. Arqit is the only company in the world to publish an independent assurance report demonstrating that its software makes keys which are Zero Trust and Computationally Secure.
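As a back-of-envelope illustration of why symmetric keys are generally considered quantum-resistant while RSA and elliptic-curve keys are not: Grover’s algorithm only square-roots a brute-force key search, so a 256-bit symmetric key still costs about 2^128 quantum operations, whereas Shor’s algorithm breaks the underlying math of RSA and ECC outright. The arithmetic:

```python
import math

# Grover's algorithm turns an N-step brute-force search into ~sqrt(N)
# steps, so a 256-bit key keeps ~128 bits of effective security.
key_bits = 256
classical_ops = 2 ** key_bits
grover_ops = math.isqrt(classical_ops)   # sqrt(2**256) == 2**128 exactly
effective_bits = grover_ops.bit_length() - 1
print(effective_bits)  # 128
```

That 2^128 work factor is why symmetric-key agreement is one path to “harvest now, decrypt later” resistance.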

This is a critical milestone for Australia’s quantum capability and a demonstration of the close collaboration promoted by the AUKUS trilateral security pact between Australia, the United Kingdom, and the United States, announced on 15 September 2021.

Phil Dawson, Managing Director and Co-Founder of AUCloud, said: “In partnership with Arqit, we are proud to offer the first Symmetric Key Agreement Software (SKAS) service to the Australian market and near-Region customers.  With our strong commitment to sovereign data protection, we are hugely excited by the potential of AUCloud’s sovereign Symmetric Key Agreement Software to provide post quantum safety. At a time when the public is even more conscious of personal data security and in light of the recent announcement by the Australian Government on its consultation process on their quantum strategy, AUCloud’s SKAS will further reduce the risk of data compromise for all Australians. Experience providing SKAS to early adopter customers will create even greater Australian leadership in this important and developing market.”  

Arqit Founder, Chairman and CEO David Williams, said: “We are excited to offer Arqit’s globally unique software in Australia in partnership with AUCloud. Our partnership builds on the huge amount of work Australia is doing in the quantum space and will enable us to deliver significant advantage to sovereign customers.” 

CSIRO R&D program to help businesses mitigate increased cyber security risk

Image credit: CSIRO

CSIRO, Australia’s national science agency, is helping to combat the growing threat of cyber attacks by providing free research and development support to enterprises in the cyber security industry.

In a statement, CSIRO said small and medium-sized enterprises (SMEs) working on novel cyber security solutions could participate in its free, 10-week online Innovate to Grow program, which provides research and development expertise.

Upon completion of the program, participants may be able to get CSIRO support to connect to national research knowledge, as well as dollar-matched R&D funding.

Surya Nepal, a Group Leader at CSIRO’s Data61, stated that cyber security threats were an increasing issue around the world, affecting a wide range of industries.

“Cyber criminals are constantly finding new ways to carry out cyber-attacks, which can have devastating impacts for companies and consumers,” Nepal said.

According to the Australian Cyber Security Centre, a 13 per cent increase in cybercrime was reported in the 2020-21 fiscal year.

CSIRO’s SME Connect Deputy Director George Feast said innovative solutions are necessary to stay ahead of these threats.

“Much of this can be driven by SMEs – who make up 99.8 per cent of all businesses in Australia – developing new cyber products and services powered by R&D. However, R&D can be an expensive undertaking for businesses and risky for those without the right guidance and support,” Feast added.

Feast stated that CSIRO invites participants to come up with a specific cyber security business idea that they’d like to explore through its Innovate to Grow program. 

“Over 10 weeks we’ll step businesses through how to refine their idea, to understand its research viability, and begin engaging a university or research institution to deliver a collaborative R&D project,” Feast said.

According to a CSIRO study published last year, despite the importance of collaboration in driving good R&D outcomes, less than 15 per cent of Australian businesses engage universities or research institutes for their innovation operations.

Businesses will also be exposed to industry knowledge, hear from innovation and industry experts, and work with an R&D mentor, as well as draw on CSIRO’s own cyber security expertise through Data61, CSIRO’s data and digital speciality arm.

Corey Fraser, Operations Project Manager at Rezilens, recently completed the Innovate to Grow: Cyber Security program. Rezilens helps make enterprise-level cybersecurity affordable and accessible to Australian SMEs.

“This was essentially our first opportunity to pursue formal R&D, as we’re still fairly young – just under two years in operation. What was really appealing for us through this program was the exposure to academics and NGOs in the security space, along with the associated benefit of learning from their industry expertise. And finding out about how we could access potential funding opportunities,” Fraser said.

Fraser recommends the initiative to other start-ups that lack the funds and time to undertake these kinds of activities. According to Fraser, CSIRO’s coordination greatly improved the program’s structure and consistency, and the lack of associated fees was a significant plus as well.

Eligible organisations may be actively involved in cyber security, or may work in other industries that provide online solutions to their consumers, such as agriculture and health care, and aim to strengthen the cyber security component of their offerings.

CSIRO’s Innovate to Grow: Cyber Security program is funded by the Australian Government Department of Industry, Science, Energy and Resources through the Cyber Security Skills Partnership Innovation Fund.

IBM and CEO Arvind Krishna Welcome President Biden to Poughkeepsie Site, Company Plans to Invest $20 billion in the Hudson Valley Region Over 10 Years

Image credit: IBM
Media Release by IBM

Today U.S. President Joseph R. Biden, Jr. and IBM (NYSE: IBM) Chairman and CEO Arvind Krishna will tour IBM’s Poughkeepsie, New York site to see firsthand where the future of computing is being innovated, designed and manufactured. During the visit, IBM will announce a plan to invest $20 billion across the Hudson Valley region over the next 10 years. The goal of the investments, which will be strengthened by close collaboration with New York State, is to expand the vibrant technology ecosystem in New York to unlock new discoveries and opportunities in semiconductors, computers, hybrid cloud, artificial intelligence and quantum computers.

IBM has long called New York state home, and its business supports more than 7,500 jobs across the Hudson Valley. This region has been a hub of innovation and manufacturing for decades. From Westchester County to Poughkeepsie to Albany, IBMers are pushing the limits of computing and helping clients embrace digital transformation. 

“IBM is deeply honored to host President Biden at our Poughkeepsie site today and we look forward to highlighting our commitments to the innovations that advance America’s economy,” Arvind Krishna, Chairman and CEO of IBM, said. “As we tackle large-scale technological challenges in climate, energy, transportation and more, we must continue to invest in innovation and discovery – because advanced technologies are key to solving these problems and driving economic prosperity, including better jobs, for millions of Americans.”

President Biden’s visit to the IBM Poughkeepsie site highlights the CHIPS and Science Act’s unique opportunity to advance American innovation and manufacturing. The technology that IBM delivers today from Poughkeepsie will directly benefit from the CHIPS and Science Act that the President recently signed into law. It will ensure a reliable and secure supply of next-generation chips for today’s computers and artificial intelligence platforms as well as fuel the future of quantum computing by accelerating research, expanding the quantum supply chain, and providing more opportunities for researchers to explore business and science applications of quantum systems. 

IBM’s Poughkeepsie site has helped the country embrace the transformative power of technology since 1941, from manufacturing armaments during World War II to developing and building the latest generation mainframe computers. In Poughkeepsie, IBM builds state-of-the-art mainframe computers that power the global economy. The site also is home to IBM’s first Quantum Computation Center – where a large number of real quantum computers run in the cloud. IBM’s vision is for Poughkeepsie to become a global hub of the company’s quantum computing development, just as it is today for mainframes.

The future of semiconductor technologies also is being created in the Hudson Valley, from Yorktown Heights to Albany and beyond. In Albany, a unique public-private semiconductor ecosystem is where IBM last year announced the first 2 nanometer chip technology, one of the semiconductor industry’s biggest breakthroughs of the last decade. The expansion of Albany’s collaborative innovation model could be a foundation for the National Semiconductor Technology Center (NSTC) that will be implemented as part of the CHIPS and Science Act.

IBM’s announcement today builds on and expands these investments in the future of American innovation, and will fuel economic growth and job opportunities for people of all backgrounds to work with cutting-edge systems and accelerate the pace of discovery across the Hudson Valley.

How Google’s commitment to open source unleashes AI and ML innovation

Image credit: Google Cloud

Google firmly believes that anyone should be able to bring their artificial intelligence (AI) ideas to life quickly and easily.

In a blog post, Google said open source software (OSS) has become increasingly vital to achieving this goal, significantly influencing the pace of innovation in the AI and machine learning (ML) ecosystems.

With investments in projects and ecosystems like TensorFlow, JAX, and PyTorch, Google has used ML to revolutionise several of its services over the past 20 years, including Search, YouTube, Assistant, and Maps.

These OSS efforts are crucial because many AI systems rely on closed or proprietary methodologies. This walled-garden strategy stifles innovation, hinders efforts to make AI explainable, ethical, and equitable, and raises entry barriers for developers.

Google asserted that it is dedicated to open ecosystems because it believes no single company should own AI/ML innovation. In the blog post, Google explores some of its major OSS AI and ML contributions from recent years, and discusses how its dedication to open technology can help businesses innovate more quickly and flexibly.

Google’s open source activities support and facilitate AI initiatives according to three pillars:

  • Access – OSS puts the latest ML technology in the hands of developers, researchers, and businesses of all sizes. It is essential to democratising ML innovation, enabling customer choice and variety in software, and reducing operating costs while speeding up scalability for all.
  • Transparency – Open source data sets, ML algorithms, training models, frameworks, and compilers enable due diligence and community evaluation. This is crucial for ML because it supports reproducibility, interpretability, equity, and increased security.
  • Innovation – Greater access and transparency naturally spur more innovation. Google’s clients and partners use open source ML frameworks and toolsets to drive further innovation across the industry.

TensorFlow, JAX, TFX, MLIR, Kubeflow, and Kubernetes are just a few of the open source software projects Google has contributed to over the past two decades. It has also sponsored important OSS data science projects, including Project Jupyter and NumFOCUS. Through these efforts, Google Cloud aims to be the best platform for the OSS AI community and ecosystem. Initiatives like these have helped make Google the top contributor to the Cloud Native Computing Foundation (CNCF).

Google’s OSS policy covers the full “idea-to-production” lifecycle, from gathering data to training models to managing infrastructure to encouraging experimentation and model improvement, as the dangers of closed technology can manifest at many stages across ML pipelines:

Data acquisition: starting the journey from idea to production-ready ML model 

Developing an ML model from a concept starts with data. TensorFlow Datasets helps users acquire ready-to-use, adaptable, and highly optimised datasets (including image, audio, and text), and offers a set of useful APIs that make it simple to organise those datasets whether users build with TensorFlow, JAX, or other ML frameworks.
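As a rough sketch of the kind of input pipeline these APIs produce — using a small in-memory array as a stand-in for a dataset that `tfds.load()` would normally download, so the example stays self-contained — a typical flow looks like:

```python
import tensorflow as tf

# Stand-in for a TFDS dataset; a real call would be something like:
#   ds = tfds.load("mnist", split="train", as_supervised=True)
images = tf.zeros([100, 28, 28], dtype=tf.float32)
labels = tf.zeros([100], dtype=tf.int32)
ds = tf.data.Dataset.from_tensor_slices((images, labels))

# The usual preparation steps: shuffle, batch, and prefetch
ds = ds.shuffle(buffer_size=100).batch(32).prefetch(tf.data.AUTOTUNE)

for batch_images, batch_labels in ds.take(1):
    print(batch_images.shape)  # (32, 28, 28)
```

Because TFDS returns standard `tf.data.Dataset` objects, the same shuffle/batch/prefetch steps apply to real downloaded datasets unchanged.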

Model development and training: shortening the path from data to useful ML

OSS libraries support the designers, implementers, testers, and debuggers of ML algorithms. On this front, some of Google’s contributions are:

  • The TensorFlow core framework, which provides APIs to help data scientists and programmers build and refine production-grade ML models on distributed, accelerated infrastructure powered by GPUs or TPUs;
  • The PyTorch Foundation, of which Google is a founding member, positioning Google to promote the use of ML by fostering an ecosystem of open source projects around PyTorch;
  • Keras, a lightweight and robust ML framework, well integrated with TensorFlow, that lets developers design and train ML models quickly and easily;
  • Model Garden, which offers open source, Google-maintained implementations of numerous cutting-edge computer vision and natural language processing models, as well as APIs to speed up training and experimentation;
  • JAX, a lean, intuitive, and modular system that combines automatic differentiation (Autograd) and the Accelerated Linear Algebra (XLA) optimising compiler to provide high-performance ML for rapid research and production;
  • TensorFlow Hub, a repository of trained ML models ready for fine-tuning and deployment; and
  • MediaPipe, a cross-platform open source project that lets users leverage customisable ML solutions for live and streaming media, including text and video.
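To illustrate the "design and train quickly" claim, a minimal Keras sketch might look like the following (the architecture and random placeholder data are arbitrary toy choices, not from the blog post):

```python
import numpy as np
from tensorflow import keras

# A small classifier: 4 input features, 3 output classes (arbitrary toy sizes)
model = keras.Sequential([
    keras.layers.Input(shape=(4,)),
    keras.layers.Dense(8, activation="relu"),
    keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Train on random placeholder data for one epoch
x = np.random.rand(16, 4).astype("float32")
y = np.random.randint(0, 3, size=16)
model.fit(x, y, epochs=1, verbose=0)

# One probability distribution per input row
preds = model.predict(x, verbose=0)
print(preds.shape)  # (16, 3)
```

The same few lines scale from a laptop experiment to distributed GPU or TPU training by changing the execution strategy rather than the model code.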

ML infrastructure management: scaling valuable models with powerful backends

Accessing and administering ML infrastructure, especially at scale, can be a barrier for many organisations. For this reason, Google has invested in efforts like:

  • TensorFlow Extended (TFX), a platform that provides software frameworks and tooling for comprehensive MLOps deployments, helping developers with data automation, model tracking, performance monitoring, and model retraining;
  • Kubeflow, which makes deploying ML workflows on Kubernetes simple, portable, and scalable; and
  • The TPU Research Cloud (TRC), which gives selected researchers who publish peer-reviewed articles and/or open source code free access to a cluster of more than 1,000 Cloud TPU devices.

Experimentation and model optimisation: encouraging discovery and iteration

Without robust processes for experimentation and optimisation, data, model training tools, and infrastructure can only go so far. For this reason, Google has contributed to projects like XManager, which lets anyone run and monitor ML experiments locally or on Vertex AI, and TensorBoard, which makes it easier to track and visualise model performance metrics.
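The logging side of TensorBoard can be sketched as follows (the metric values are made up for illustration): a training loop writes scalar summaries to a log directory, and `tensorboard --logdir logs` then visualises them in the browser.

```python
import tensorflow as tf

# Write a few scalar summaries, as a training loop would after each step
writer = tf.summary.create_file_writer("logs/demo")
with writer.as_default():
    for step in range(5):
        tf.summary.scalar("loss", 1.0 / (step + 1), step=step)
writer.flush()

# View the resulting curves with:  tensorboard --logdir logs
```

Because the summaries are plain event files on disk, runs from different machines can be aggregated by pointing TensorBoard at a shared log directory.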

These areas of emphasis will benefit not only Google’s clients but also the entire open source AI community.


Intel Hits Key Milestone in Quantum Chip Production Research

Image credit: Intel Corporation
Media Release by Intel 

The Intel Labs and Components Research organizations have demonstrated the industry’s highest reported yield and uniformity to date of silicon spin qubit devices developed at Intel’s transistor research and development facility, Gordon Moore Park at Ronler Acres in Hillsboro, Oregon. This achievement represents a major milestone for scaling and working toward fabricating quantum chips on Intel’s transistor manufacturing processes.

The research was conducted using Intel’s second-generation silicon spin test chip. Through testing the devices using the Intel cryoprober, a quantum dot testing device that operates at cryogenic temperatures (1.7 Kelvin or -271.45 degrees Celsius), the team isolated 12 quantum dots and four sensors. This result represents the industry’s largest silicon electron spin device with a single electron in each location across an entire 300 millimeter silicon wafer.

Today’s silicon spin qubits are typically demonstrated on a single device, whereas Intel’s research demonstrates success across an entire wafer. Fabricated using extreme ultraviolet (EUV) lithography, the chips show remarkable uniformity, with a 95% yield rate across the wafer. Combining the cryoprober with robust software automation, the team characterized more than 900 single quantum dots and more than 400 double dots at the last electron, at one degree above absolute zero, in less than 24 hours.

Increased yield and uniformity in devices characterized at low temperatures over previous Intel test chips allow Intel to use statistical process control to identify areas of the fabrication process to optimize. This accelerates learning and represents a crucial step toward scaling to the thousands or potentially millions of qubits required for a commercial quantum computer.

Additionally, the cross-wafer yield allowed Intel to automate the collection of data across the wafer in the single-electron regime, enabling the largest demonstration of single and double quantum dots to date.

“Intel continues to make progress toward manufacturing silicon spin qubits using its own transistor manufacturing technology,” said James Clarke, director of Quantum Hardware at Intel. “The high yield and uniformity achieved show that fabricating quantum chips on Intel’s established transistor process nodes is the sound strategy and is a strong indicator for success as the technologies mature for commercialization.

“In the future, we will continue to improve the quality of these devices and develop larger scale systems, with these steps serving as building blocks to help us advance quickly,” Clarke said.

Full results of this research will be presented at the 2022 Silicon Quantum Electronics Workshop in Orford, Québec, Canada on Oct. 5, 2022.
