startup-together.com

100% for Startups: Stories, services and insights to start up with success!

Protecting AI-related innovation

Benjamin Fechner takes a closer look at protecting AI-related innovation.

The article is based on a panel discussion that was conducted with the kind support of the applied AI Initiative, UnternehmerTUM GmbH, in Garching on 28 May 2019. Benjamin prepared and moderated the panel discussion. The panelists were:

  • Yannis Skoulikaris (EPO, Director Operations, Sector Information and Communications Technology, The Hague),
  • George Summerfield (K&L Gates, Partner, Chicago),
  • Michael Schramm (2S-IP, Founding partner, Munich), and
  • Björn Geigle (Computomics, Bioinformatics Scientist, Tübingen).

Sole responsibility for this article and its content lies with the author, Benjamin Fechner. To contact the author, send an e-mail to office@drfechner.de. A more recent version of the article is available for download here.

1. INTRODUCTION

Recently, the World Intellectual Property Organisation (WIPO) published a major report (https://www.wipo.int/edocs/pubdocs/en/wipo_pub_1055.pdf) on artificial intelligence (AI) as a technology trend. According to the WIPO, “Artificial intelligence is a new digital frontier that will have a profound impact on the world, transforming the way we live and work.”

The WIPO’s statistics show a sudden and massive increase in the number of AI-related patent applications over the past five years. In 2017 the US Patent and Trademark Office (USPTO) received more than 150,000 patent applications in the field. Two thirds of the 500 most active patent filers are companies such as IBM, while one third are universities and public research organisations. One third of the companies are based in the US. Two thirds of the research organisations are Chinese. One sixth of the patent applications relate to telecommunications, one sixth concern transport, and one eighth are in the field of life and medical science.

The numbers give a clue to the state of development of artificial intelligence as a technology in its own right and to the importance of AI as a tool in other fields of technology. 

The Russian economist Kondratiev (1892-1938) identified cycles of economic expansion and contraction that were several decades long. According to the Austrian economist Joseph Schumpeter (1883-1950), such long economic cycles could be caused by innovation. Selected innovations spark a technical revolution that first leads to fluctuation in investment and finally to economic growth. 

It seems that technical revolutions with the potential to destabilise an economic cycle, with subsequent economic growth, result from the removal of a limitation. Table 1 lists examples of innovations that, in their time, had a profound effect on technology and on economic activity based on that technology. Table 1 also shows the limitation that each innovation helped overcome.

Technical revolution | Effect | Limit overcome
Water power | magnification of the individual’s power | body power
Steam engine | magnification of the individual’s power | machine tied to water stream
Electricity | democratisation of light (education), appliances and tools (independence) | machine tied to power generator
Automobility | democratisation of transport | engine tied to fixture
Semiconductor circuits | democratisation of digital algorithmic signal and data processing | processing tied to analogue sensor
Artificial intelligence | simulation of mental acts such as recognition and forecast | capacity of the human brain

Table 1: Technical revolutions, their effects on humans and society, and the respective limitation overcome

Artificial intelligence seems to be the latest revolutionary innovation. It could well have just begun to cause another long cycle of economic activity. Why now? 

The question can be answered in terms of a life-cycle pattern that has been observed in connection with previous technological upheavals: a system based on a certain technology can be seen as the subject of a life-cycle that encompasses six phases. During the first, laboratory-invention phase, early prototypes are made, patents are filed, and small-scale use of the new technology may occur. In a second phase, the technical and commercial feasibility of the new technology is demonstrated. Widespread potential applications become visible. Then, during a third phase of turbulent growth, the new technology explosively takes off and brings about a structural crisis in the economy, which may also be accompanied by a political crisis as a new regime of regulation is established. During a fourth phase, growth continues to be high; the new technology, however, is now widely accepted as a dominant technology. Meanwhile, the use of the new technology in a still wider range of industries may be challenged by adjustment to newer technology. During a fifth phase, the profitability of the new technology erodes. The technological system matures. A new crisis of structural adjustment sets in. Finally, during the sixth phase, the ‘new’ technology is no longer new at all, but mature. ‘Renaissance’ effects in co-existence with newer technologies may still occur. But eventually the technology disappears.

For further reading on the nexus between technology and economic activity, reference is made to “As time goes by – From the Industrial Revolutions to the Information Revolution” by Chris Freeman and Francisco Louçã (Oxford, 2001).

The origins of AI can be traced to the works of Leibniz, Boole, Turing and others. Modern AI research started in 1956 at the Dartmouth Summer Research Project on Artificial Intelligence, from the work of five scientists: Newell, Simon, McCarthy, Minsky and Samuel. At the time, a bold prediction was made: machines will be able to perform any work that a human can perform. Table 2 lists the ups and downs in the development of AI technology since the days of its first mention.

Period | Development
1956 | First mention of ‘artificial intelligence’ (AI) at the Dartmouth Summer Research Project on Artificial Intelligence
1956-74 | Golden years of government funding
1974-80 | First AI ‘winter’, when capacities failed to meet expectations
1980-87 | New optimism associated with the advent of knowledge-based expert systems
1987-93 | Second AI ‘winter’ coincides with a collapse of the specialised hardware industry
1993-2011 | Return of optimism as more data become available and computer power increases
2012-today | AI boom due to more data, connectedness, and still further increased computer power

Table 2: Ups and downs during the development of artificial intelligence technology

During the recent AI boom, AI-related development has markedly shifted from scientific development to application. Over the past years, AI use has explosively taken off. Today, we see turbulent growth of AI applications. Turbulent in particular where the use of AI causes redundancies, as the need for service providers such as cab drivers and oncologists goes away. Clearly, at least in some professions, the advent of AI brings about a structural crisis. It is entirely possible that a political crisis may also result, as a new regime of regulation needs to be established. As will be seen below, conventional instruments for protecting intellectual property seem insufficient, or even a misfit, when it comes to protecting AI-related innovation. In particular, the patent system, which is supposed to foster economic growth, may not be able to provide adequate protection for inventions relying on artificial intelligence.

2. Artificial Intelligence (AI)
2.1. Definition

There is not ‘one’ definition of AI.

Yannis Skoulikaris, a director in the EPO concerned with the examination of computer-implemented inventions (CII), identified a number of descriptions of AI:

  • AI is a set of technologies that enable machine intelligence to simulate or augment elements of human intelligence.
  • AI is a confluence of three breakthroughs: big data, massive computing power and sophisticated algorithms.
  • AI is applied mathematics.
  • AI is an attempt to solve with computers such problems that, if solved by humans, would require intelligence.
  • AI is a transition from ‘Sense and act’ to ‘Sense, think and act’.
  • AI is a transition from ‘computational support’ in the 20th century to ‘cognitive support’ in the 21st century.

Other statements made with reference to AI are:

  • AI is the science and engineering of making intelligent machines. (McCarthy)
  • AI is a machine’s adaptation of cognitive functions that are associated with the human mind such as understanding of language, problem-solving, and learning.
  • AI is a set of algorithms that parse data and learn from the data to make informed decisions. The AI process uses algorithms (that iteratively learn from data) and statistical models.
  • AI allows computers to make decisions without being explicitly programmed to perform the task.
  • AI is a bunch of numbers that are undecipherable, multiplied together in ways inexplicable to humans. If someone hacked in and changed a thousand numbers, how would people know? (Kai-Fu Lee, author of ‘AI Superpowers: China, Silicon Valley, and the New World Order’)

Recently, an expert group set up by the European Commission proposed a definition of AI (https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=56341):

“Artificial intelligence (AI) systems are software (and possibly also hardware) systems designed by humans that, given a complex goal, act in the physical or digital dimension by perceiving their environment through data acquisition, interpreting the collected structured or unstructured data, reasoning on the knowledge, or processing the information, derived from this data and deciding the best action(s) to take to achieve the given goal. AI systems can either use symbolic rules or learn a numeric model, and they can also adapt their behaviour by analysing how the environment is affected by their previous actions.

As a scientific discipline, AI includes several approaches and techniques, such as machine learning (of which deep learning and reinforcement learning are specific examples), machine reasoning (which includes planning, scheduling, knowledge representation and reasoning, search, and optimisation), and robotics (which includes control, perception, sensors and actuators, as well as the integration of all other techniques into cyber-physical systems).”

Though authoritative, the expert definition seems to lack conciseness; it is merely descriptive. Suffice it to say that AI is software. However, in contrast to conventional software, which is composed deductively, i.e., which results from a synthesis of functionalities and therefore lends itself to functional analysis, artificial intelligence is produced inductively: typically, AI processes an input such that the output’s dependence on the input is not intelligible in terms of the problem that the AI helps to solve. For example, a neural net may be trained to provide a ‘correct’ output in response to some input fed to it.

The difference in process – inductive versus deductive – matters in patent practice. In order to write a patent claim, the patent drafter must have understood the invention as a solution to a problem. With conventional software, such an understanding is possible by tracing the path of deduction back to the reasons that stood at its origin. In contrast, with artificial intelligence the inductive process of its creation makes it hard, if not impossible, to understand cause and effect in terms of problem and solution. Even though an explanation of the output may be possible, for example, in terms of the weights applied between the neurons of a trained neural net, this explanation neither suggests any understanding of the input by the AI, nor does it suggest that the output is based on meaning represented by, or otherwise associated with, the input.

There are no intelligent computers, i.e., intelligent bundles of hardware and software; there are only computers that are intelligently programmed. Despite the use of the term ‘intelligence’ in AI, artificial intelligence seems to be as stupid as conventional software, i.e., artificial intelligence is no more intelligent than a light bulb. The difference between conventional software and artificial intelligence lies in the name and in the way the solution to a problem is arrived at; it does not lie in the level of intelligence applied to the problem.
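To make the inductive mode of creation concrete, consider the following minimal sketch (our own illustration, not taken from the panel discussion): a single artificial neuron is ‘trained’ on the logical OR function by nudging numbers until the outputs match the examples. The learned parameters solve the task, yet inspecting them explains nothing about the logical problem being solved.

```python
# Minimal sketch (illustrative only): a perceptron trained on the OR function.
# The 'solution' it finds is just a few numbers; inspecting them reveals
# little about the logical problem being solved.
import random

random.seed(0)
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]  # OR truth table

w = [random.uniform(-1, 1), random.uniform(-1, 1)]  # weights
b = random.uniform(-1, 1)                           # bias

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Inductive 'training': adjust the numbers whenever an output is wrong;
# no deductive reasoning about 'OR' is ever represented anywhere.
for _ in range(20):
    for x, target in data:
        error = target - predict(x)
        w[0] += 0.1 * error * x[0]
        w[1] += 0.1 * error * x[1]
        b += 0.1 * error

print(w, b)                           # opaque numbers, not an explanation
print([predict(x) for x, _ in data])  # [0, 1, 1, 1]
```

The point of the sketch is that the ‘solution’ consists of a few opaque parameters; nothing in them expresses the rule ‘output 1 if any input is 1’, which is precisely what makes a problem-and-solution analysis difficult.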

2.2. Techniques of artificial intelligence

Artificial intelligence encompasses a variety of techniques; amongst them are:

  • machine learning
  • logic programming
  • fuzzy logic
  • probabilistic engineering
  • ontology engineering

Machine learning is particularly widely used. What is machine learning? As with ‘artificial intelligence’, it seems difficult to identify one generally accepted definition of ‘machine learning’. Rather, there are many descriptions of what machine learning does and achieves:

  • Machine learning is a field of study that gives computers the ability to learn without being explicitly programmed. (Arthur Samuel)
  • Machine learning is an AI process of data analysis that automates analytical model building.
  • Machine learning computer programs can teach themselves when exposed to new data. (David Fumo)
  • Machine learning is a computer program that learns from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E. (Tom Mitchell)
  • Machine learning algorithms acquire their own knowledge.

A rather specific and, in our opinion, useful definition is this:

  • Machine learning refers to algorithms that build a model on sample data used as training data in order to identify and extract patterns from data.

Machine learning comes in several flavours. The most widespread machine learning approaches are supervised learning, unsupervised learning, and reinforcement learning.

  • Supervised learning: Exemplary inputs and their desired outputs are fed to the artificial intelligence to learn a general rule that maps inputs to outputs.
  • Unsupervised learning: Exemplary inputs are fed to the artificial intelligence to find structure such as a hidden pattern or features in the inputs.
  • Reinforcement learning: The artificial intelligence is made to interact with a dynamic environment in pursuit of a certain goal (such as driving a vehicle or playing a game against an opponent). Feedback is processed in terms of rewards and punishments as the program navigates a ‘problem space’.
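The supervised flavour can be sketched in a few lines. The following is our own illustration under the assumption of a simple linear rule, not code from the article: exemplary inputs and desired outputs are fed to a learner, which infers a general mapping it can then apply to unseen inputs.

```python
# Supervised-learning sketch (illustrative only): the learner is shown
# (input, desired output) pairs and infers a general rule mapping one
# to the other -- here the hidden rule is y = 2x + 1.
samples = [(x, 2 * x + 1) for x in range(10)]

a, b = 0.0, 0.0  # parameters of the model y = a*x + b

# Gradient descent on the mean squared error over the training examples.
for _ in range(2000):
    grad_a = sum(2 * (a * x + b - y) * x for x, y in samples) / len(samples)
    grad_b = sum(2 * (a * x + b - y) for x, y in samples) / len(samples)
    a -= 0.01 * grad_a
    b -= 0.01 * grad_b

print(round(a, 2), round(b, 2))  # parameters close to 2 and 1
print(a * 100 + b)               # generalises to an unseen input (about 201)
```

Here the learner recovers the rule from ten examples and then applies it to an input it has never seen; real machine-learning systems differ in scale and model class, not in this basic scheme.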
2.3. Applications

Artificial intelligence is applied in many fields of technology. Patent applications in a variety of technology fields reflect the spreading relevance of artificial intelligence:

  • computer vision, which includes image recognition (49% of all AI-related patents)
  • natural language processing (14% of all AI-related patents)
  • speech processing (13% of all AI-related patents)
  • self-driving cars
  • robotics
  • control methods
  • cybersecurity
  • spam detection
  • product recommendations
2.4. Problems associated with artificial intelligence

The World Intellectual Property Organisation (WIPO) identified a number of problem fields:

  • Ownership and rights
  • Data privacy and ethics
  • Security
  • Employment
  • Superintelligence

While below we shall take a closer look at the problems related to ownership and rights, other problem fields are outside the scope of this publication. Nonetheless, for the sake of completeness they shall briefly be presented.

Data privacy and ethics: AI equips mankind with unparalleled means to monitor and sift through human behaviour, physiology and biology on a grand scale. Moreover, free access to data can enable great, personalised experiences, but how open is too open? How can we ensure that citizens retain control over their personal information? Can the knowledge thus gathered be unethical? Are conclusions drawn from this knowledge unethical? Where should we draw the line between that which is ethical and that which is not?

Security: What is the best way to protect critical interconnected systems such as intelligent transportation? How can an increasing volume of data be kept safe?

Employment: On the one hand, it is expected that AI will help machines perform dangerous tasks once done by humans. It will also augment current jobs, making humans more accurate and efficient and keeping us safer. On the other hand, it seems that AI will put people out of work. So, how can intelligent machines fit into the world of work? Which jobs will AI change, and how?

Superintelligence: AI conjures up a possibility of machine superintelligence. Is a move from narrow AI (using AI for individual tasks) to superintelligence desirable? What happens if intelligent machines exceed the capabilities of the human brain not just in part, but in total? Will such machines develop a consciousness and perhaps even an associated will to defend their existence? It seems that for the human species, the establishment of superintelligence would be a watershed moment: it would be the most important invention ever.

3. How to protect software

Since artificial intelligence can always be implemented in software, instruments for protecting software can generally be used to protect artificial intelligence. The instruments of protection for software are secrecy, copyright and patent. To look at these instruments with a broad scope, reference is made to the Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPS). In the European Union, the European Parliament and the European Council issued so-called Directives that take account of TRIPS. National law implements the Directives in the member states of the European Union and thereby provides at least the level of protection of intellectual property that is envisaged by TRIPS.

3.1. Secrecy

Natural and legal persons shall have the possibility of preventing information from being disclosed to, acquired by, or used by others without their consent. The information needs to be secret in the sense that it is not generally known among or readily accessible to persons within the circles that normally deal with the kind of information in question. The information must have commercial value because it is secret. And the information must have been subject to reasonable steps under the circumstances, by the person in control of the information, to keep it secret. (Art. 39 TRIPS)

In the European Union, the Directive (EU) 2016/943 forms the basis for national law to protect trade secrets in accordance with TRIPS.

3.2. Copyright

Generally, copyright protection shall extend to expressions. Copyright protection shall not extend to ideas, procedures, methods of operation or mathematical concepts as such. (Art. 9, No. 2 TRIPS)

In particular, computer programs, whether in source or object code, shall be protected as literary works under the Berne Convention (1971). In addition, compilations of data or other material, whether in machine readable or other form, which by reason of the selection or arrangement of their contents constitute intellectual creations shall be protected as such. Such protection shall not extend to the data or material itself. (Article 10 TRIPS)

Copyright arises automatically. Notwithstanding that copyright must not be conditioned on any ‘formality’ such as registration, a copyright holder may seek to document the date of a creation protected under copyright. Computer programs are regarded as ‘literary works’ under copyright law and receive the same terms of protection. National exceptions to copyright (such as ‘fair use’ in the United States) are constrained by the Berne three-step test. Thus, copyright confers on the authors of computer programs the exclusive right of authorising the reproduction of these computer programs, in any manner or form. In certain special cases the reproduction of such software may be permitted, provided that such reproduction does not conflict with a normal exploitation of the work and does not unreasonably prejudice the legitimate interests of the author.

In the European Union, a set of directives forms the basis for national law to protect copyright in accordance with TRIPS, among them the ‘Software Directive’ (Directive 2009/24/EC on the legal protection of computer programs), the ‘Database Directive’ (Directive 96/9/EC on the legal protection of databases), and the ‘Enforcement Directive’ or ‘IPRED’ (Directive 2004/48/EC on the enforcement of intellectual property rights). (https://ec.europa.eu/digital-single-market/en/eu-copyright-legislation) AI-related collections of data, for example, data for training the AI or the parameter value collections that result from training, can form subject-matter eligible for protection under copyright.

3.3. Patent

Notwithstanding laws that limit the freedom of actors in the market, for example, to counter a tendency of the market to efface itself, in principle, in the free world of a market economy everybody can do everything: manufacture, offer for sale, sell, export or import goods. A patent, however, is a right that the patentee can use to exclude others from manufacturing, offering for sale, selling, exporting or importing goods that have certain properties defined in the patent. Thus, patent rights run contrary to the principle of the market economy. Nonetheless, patents are granted, since it is believed that the prospect of obtaining a temporary impediment of third parties in the market can stimulate the innovator to share the innovation with the public. Thereby, innovation can spread more quickly and, it is hoped, economic growth is fostered.

Thus the patent has a double function: First and foremost, the patent is a publication of innovation, and second, the patent defines the properties of subject-matter of the exclusion right. The structure of a patent publication reflects its double function; one could say “Form follows function”.

Function | Form
Disclosure of how to make / implement / use the technique according to the invention. | Specification / description / drawings disclose embodiments of the claimed subject-matter.
Definition of what the patent owner can keep others from doing (offering, manufacturing, using, …), i.e., the properties of the claimed subject-matter. | Claims (‘A method …’, ‘An apparatus …’, ‘A system …’) define a ‘forbidden’ combination of elements / features.

Table 3: Form follows function – structure of a patent publication

TRIPS ensures an essentially universal approach to patents, i.e., in major jurisdictions similar requirements apply for an innovation to be a patentable invention. An invention must be (Art. 27.1 TRIPS):

  • New (inaccessible to the public before filing for patent);
  • Inventive (not obvious for the “skilled person” to derive before filing for patent);
  • Industrially applicable (useful);

and

  • In a field of technology.

In the European Patent Organisation, which is separate from and independent of the European Union, the European Patent Office (EPO) grants patents in accordance with the European Patent Convention (EPC), which, in turn, is consistent with TRIPS. The EPC stipulates that European patents shall be granted for any inventions in all fields of technology, provided that they are new, involve an inventive step and are susceptible of industrial application (Art. 52(1) EPC). The reference to ‘fields of technology’ is a limitation. The EPC reflects this limitation in rules that govern the two essential functional portions of a patent: Regarding the claims, Rule 43 EPC stipulates that the claims shall define the matter for which protection is sought in terms of the technical features of the invention. Claims shall contain a statement indicating those technical features which are necessary for the definition of the claimed subject-matter and also specify those technical features for which, in combination with the features necessary for the definition, protection is sought. Regarding the description, Rule 42 EPC stipulates that the description shall specify the technical field to which the invention relates and shall disclose the invention in such terms that the technical problem and its solution can be understood.

The EPC contains a non-exhaustive blacklist of items that are not regarded as inventions and that, consequently, to the extent to which a European patent application or European patent relates to such subject-matter or activities as such, are excluded from patentability (Art. 52(2) and (3) EPC):

(a) discoveries, scientific theories and mathematical methods;

(b) aesthetic creations;

(c) schemes, rules and methods for performing mental acts, playing games or doing business, and programs for computers;

(d) presentations of information.

What the subject-matter on the blacklist has in common is that it lacks technical character. The EPC is strict in that patent protection is conferred only on technical subject-matter.

It should be noted that there is no legal definition of ‘technical’. However, there is case law that expounds on the term. In a landmark decision, the German Federal Court of Justice (Bundesgerichtshof – BGH), also referred to as the German Supreme Court, attempted a definition of ‘technical’ which seems to have been accepted beyond the borders of Germany: according to the court, a technical teaching is characterised in that controllable natural forces are methodically utilised to achieve a causally foreseeable result. (BGH, 27.03.1969 – X ZB 15/67 – Rote Taube)

According to the German Supreme Court, even if the teaching does not seek to achieve a causally foreseeable success, which is achieved by using controllable forces of nature directly without intervening human intellectual activity, an invention is of a technical nature, if it is characterised by knowledge based on technical considerations. (BGH, 17.10.2001 – X ZB 16/00 – Logikverifikation)

The German Supreme Court conceded that the definition of ‘technical’ is subject to change to the extent that natural forces become controllable. (BGH, 11.06.1991 – X ZB 24/89 – Chinesische Schriftzeichen) Nonetheless, according to the German Supreme Court, the term ‘technical’ is the only suitable criterion to delimit patentable inventions vis-à-vis other results of human activity that are not eligible for patent protection. (BGH, 22.06.1976 – X ZB 23/74 – Dispositionsprogramm)

4. Patenting software

An invention whose implementation involves the use of a computer, computer network or other programmable apparatus is a so-called computer-implemented invention (CII), also called software-invention. Computer-implemented inventions have one or more features (elements) which are realised wholly or partly by a computer program.

Software-related inventions are usually implemented by means of computer programs. Patent applications typically do not specify whether the implementation is in hardware, software or a mix of the two. A computer program, when executed by the computer, performs steps of a method. Therefore, a patent claim that defines a computer-implemented invention is akin to a method claim. However, while use of a method protected under a method claim may be difficult to prove, in particular with respect to a territory where the patent is effective, infringement of a patent claim on a computer-implemented invention can be less difficult to show: a suspect piece of software that, for example, is offered in the respective territory can perhaps be obtained and examined.

Artificial intelligence can be software. Frequently, AI-related innovation is computer-implemented. While programs for computers as such are excluded from patentability, computer-implemented inventions can be patentable. When it comes to protecting AI-related innovation under patent, a number of challenges can be identified:

Challenge (1): AI innovation should not be patented, since it is data and software and, therefore, cannot be patented. This challenge is familiar from protecting innovation related to conventional software. An answer can be provided by understanding what is patentable and what is not!

Challenge (2): AI-related innovation is always obvious, since AI is merely a collection of training data, a conventional optimisation process, and a set of data without concept that results from training! The optimisation may settle on a local optimum; hence, it is not always clear how an AI that is trained to perform a task can be altered and still perform this task. This challenge is particular to artificial intelligence and arises from the fact, frequently observed, that artificial intelligence can achieve a result without ever having produced any understanding that explains the result as a solution to a particular problem. If patents on AI-related inventions are to be granted, it is necessary to understand the process and criteria of how patent offices examine patent applications and grant patents on AI-inventions!

Challenge (3): Frequently, AI-related innovation manifests in products that could also have been developed otherwise. A patent as an exclusion right carries weight and value if its infringement can be spotted. Patenting AI-related innovation does not seem to make any sense where patent infringement cannot be observed.

Challenge (4): ‘AI-assisted’ inventions seem to be generated by artificial intelligence rather than by any human. That raises the question: Who is the inventor? This question must be answered, since it seems fair to say that where there is no inventor, there cannot be any invention. An answer is also of immediate practical importance, since patent law in many jurisdictions requires the inventor to be stated – and to be a natural person. Some jurisdictions have special employee-invention law. So it matters whether an invention made in a company setting can be attributed to a computer instead of an employee.

5. Does AI-related innovation pertain to a field of technology?

For the purposes of a discussion in the EPO in 2018 on patenting AI, three possible types of AI were defined:

  • ‘Core AI’, where the challenge is that it often relates to algorithms as such, which as mathematical methods are not patentable,
  • Trained models/machine learning, where claiming variations and ranges might be an issue,
  • AI applied as a tool in a technical field, defined via technical effects.

Patent offices take a schematic approach to assessing the patentability of an invention claimed in a patent application. As shown in FIG. 1, the schemes can vary from patent office to patent office; in particular, the examining approach taken by the US Patent and Trademark Office (USPTO) differs from the examining approach used in the European Patent Office (EPO).

 

FIG. 1 Examining approaches used in the USPTO and in the EPO

The differences are, however, not substantial and should lead to the same result.

Considering the statutory blacklist, not all inventions are patentable; for example, computer programs, mathematical methods, algorithms, and rules and methods for performing mental acts are excluded if claimed ‘as such’ (Art. 52(2) and (3) EPC). Per the EPO Guidelines, ‘as such’ covers activities which represent abstract concepts devoid of any technical implications. In addition, case law requires the claimed subject-matter to be repeatable (BGH, 27.03.1969 – X ZB 15/67 – Rote Taube).

AI-related subject-matter can appear to fall on the blacklist of non-patentable innovations: as a mathematical method, a scheme, a rule, a method for performing mental acts, a program for computers, a presentation of information, or as not repeatable. For example, AI can be classified as a computer program or algorithm. In that case, the European Patent Office uses the so-called 2-hurdle test to examine the European patent application.

The first hurdle concerns patentability. The first question to be answered is: Does the claimed subject-matter have technical character? Per case law T 258/03 (Auction method/HITACHI) of 21.4.2004, technical character results either from the physical features of an entity or (for a method) from the use of technical means. In fact, a single technical claim feature lends technical character to the claimed subject-matter. A claim feature is technical if the claim feature has a technical effect.

If the answer to the first question is ‘No, the claimed subject-matter does not have technical character’, then the claimed subject-matter falls under the blacklist ‘as such’ and is not patentable; otherwise, the claimed subject-matter takes the first hurdle.

The second hurdle is the establishment of an inventive step. The second question to be answered is: Do the technical means that lend technical character to the claimed subject-matter solve a technical problem in a non-obvious way? So the EPO not only determines whether a technical problem is solved by technical means, but also whether the solution involves an inventive step. For instance, an applicant seeking protection for an algorithm will have to show that the algorithm contributes to the technical solution and, most importantly, that it does so in a specific direction.

In sum, in order to pass the 2-hurdle test, it is necessary that at least one claim feature has a technical effect, so that the claimed subject-matter has technical character; and it is sufficient that the combination of those claim features that have a technical effect involves an inventive step, i.e., that this combination is not obvious vis-à-vis the closest prior art.
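As a rough sketch only – a simplification, not the EPO’s actual procedure, with the `Feature` class and the stand-in obviousness check being hypothetical – the logic of the 2-hurdle test could be modelled as follows:

```python
from dataclasses import dataclass

@dataclass
class Feature:
    name: str
    has_technical_effect: bool
    known_from_prior_art: bool = True

def two_hurdle_test(claim_features):
    """Rough sketch of the EPO's 2-hurdle test (simplified)."""
    # Hurdle 1: technical character -- a single technical claim feature
    # suffices to lend technical character to the claimed subject-matter.
    technical = [f for f in claim_features if f.has_technical_effect]
    if not technical:
        return "not patentable: excluded subject-matter 'as such'"
    # Hurdle 2: inventive step, assessed only on the technical features.
    # Hypothetical stand-in for the problem-solution approach: treat the
    # combination as obvious if every technical feature is already known.
    if all(f.known_from_prior_art for f in technical):
        return "not patentable: no inventive step"
    return "may be patentable (novelty etc. still to be examined)"
```

The sketch captures the asymmetry stated above: a single technical feature is enough to pass the first hurdle, but only the technical features are weighed at the second.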

6. Assessment of inventive activity underlying AI-related inventions
6.1. EPO’s problem-solution approach to assessment of non-obviousness

In the assessment of inventive step, the EPO applies the so-called problem-solution approach. Having regard to AI-related claimed subject-matter, the test comes with a twist in that only those claim features are considered in the problem-solution approach that were found to have technical effect.

When applying the problem-solution approach, the EPO first identifies one document or other piece of prior art, typically in the field of technology that the claimed subject-matter pertains to, that shares a common purpose with the claimed subject-matter and that the EPO therefore considers to be particularly close to the claimed subject-matter. In light of this ‘closest prior art’, the EPO defines an ‘objective technical problem’ solved by the combination of technical features of the claimed subject-matter. While the ‘common purpose’ is more or less expressly stated a priori in both the prior art and the patent application under examination, the ‘objective technical problem’ is derived ex post.

6.2. EPO’s skilled person in the assessment of non-obviousness

Then, the EPO defines a fictitious character, the person of ordinary skill in the art, whose supposed capabilities, actions and limitations are contemplated in the assessment of inventive step. This skilled person has textbook knowledge, knows the standards relevant in the technical field, uses the typical tools available for work in the technical field, uses reference books, and is sometimes a typical team in the technical field rather than an individual. The skilled person is always interested in an optimisation of the familiar technology and will be ready to apply trial and error to achieve an improvement.

6.3. EPO’s Could-would test in the assessment of non-obviousness

In the assessment of inventive step, i.e., non-obviousness, of the claimed subject-matter, the EPO applies the so-called ‘Could-would’ test.

First, the EPO examines if the person skilled in the art could have come to the combination of technical features of the claimed subject-matter. If that is not the case, i.e., if the skilled person could not have come to the combination of technical features in the examined patent claim, then the claimed subject-matter is found to involve inventive step.

Otherwise, the EPO further examines if the person skilled in the art not only could, but also would have come to the combination of technical features of the claimed subject-matter. If that is not the case, i.e., if the skilled person would not have come to the combination of technical features in the examined patent claim, then the claimed subject-matter is also found to involve inventive step.
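The two stages described above can be summarised schematically. This is a minimal sketch, not the EPO’s actual procedure; both inputs stand for hypothetical yes/no findings reached by the Examiner:

```python
def could_would_test(could_arrive: bool, would_arrive: bool) -> str:
    """Schematic sketch of the EPO's 'Could-would' test (simplified).

    could_arrive: the skilled person COULD have arrived at the claimed
                  combination of technical features.
    would_arrive: the skilled person not only could, but WOULD have
                  arrived at it, e.g., prompted by the prior art.
    """
    if not could_arrive:
        return "inventive step"  # skilled person could not have arrived
    if not would_arrive:
        return "inventive step"  # could have, but would not have
    return "obvious"             # could and would have arrived
```

Only the combination ‘could and would’ defeats inventive step; a mere theoretical possibility (‘could’ without ‘would’) does not.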

6.4. Patenting Software in the EPO

The EPO has consistently pursued a practice that is in line with the criteria for patenting in general and patenting software in particular. In retrospect, though, there has been a struggle to get the examination logic right and, accordingly, a development of the practice. Three phases can be distinguished, each associated with its own approach: the Contribution approach, the Further-technical-effect approach and the Comvik-Hitachi approach (Yannis Skoulikaris, ‘Patenting Software-related Inventions according to the European Patent Convention‘, 2013). In order to understand the approach that has been used throughout the last fifteen years, and to anticipate examination results with regard to presently drafted patent claims, it helps to understand the previously used approaches.

6.4.1. Contribution approach

According to the Contribution approach, the Examiner identified a ‘delta’ between the claimed subject-matter and the closest prior art. Based on the ‘delta’, the Examiner established the objective technical problem solved by the claim features of the delta. If that problem was on the blacklist of excluded subject-matter, then the claimed subject-matter was found not to be patentable.

In the decision T 0769/92 of 31 May 1994 (General purpose management system/ SOHEI) the EPO’s Board of Appeal signalled that, consistent with the principle that an exceptional provision should be interpreted narrowly, any technical consideration in the solution of the problem was sufficient to lend technical character to an invention:

“…., if technical considerations concerning particulars of the solution of the problem the invention solves are required in order to carry out that same invention”, then such technical considerations “lend a technical nature to the invention in that they imply a technical problem to be solved by (implicit) technical features”.

Thus, according to the Board, where technical problems were solved by the claimed invention, it simply was not right to exclude the claimed invention from patenting. However, at the time, the Boards of Appeal had not yet developed a differentiated understanding of how to distinguish a technical from a non-technical invention below the level of the technical character that was somehow found in the claimed subject-matter as a combination of claim features. In particular, they had not yet settled at which point in the examination to ask which question in order to differentiate a patentable from a non-patentable invention. Therefore, when trying to establish technical character, the Boards asked questions concerning the claimed subject-matter vis-à-vis the prior art, i.e., questions of novelty and inventive step.

6.4.2. Further technical effect approach

The Further technical effect approach was based on a recognition of this deficiency. In T 1173/97 of 1 July 1998 (Computer program product/IBM) the Board confirmed that a computer program product was not excluded from patentability under Article 52(2) and (3) EPC, i.e., the EPC’s blacklist of non-technical inventions, if, when it was run on a computer, it produced a further technical effect which went beyond the ‘normal’ physical interactions between program (software) and computer (hardware).

6.4.3. Comvik-Hitachi approach

Knowing that a claimed subject-matter was not per se excluded from patentability for lack of technical character left the problem of how to determine whether the claimed subject-matter met the requirement of inventive step. In two landmark decisions the EPO’s Boards of Appeal developed a procedure that the EPO has applied ever since. In particular, patent applications on inventions related to artificial intelligence are likely to fall into the category of what the EPO calls ‘mixed inventions’, which are defined by both features that have a technical effect and features that do not.

In T 0641/00 of 26 September 2002 (Two identities/ COMVIK) the Board found that an invention consisting of a mixture of technical and non-technical features and having technical character as a whole is to be assessed with respect to the requirement of inventive step by taking account of all those features which contribute to said technical character whereas features making no such contribution cannot support the presence of inventive step.
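The COMVIK principle can be illustrated with a minimal sketch. The feature names are hypothetical, loosely inspired by the facts of the case; the point is only that the non-contributing features are filtered out before inventive step is assessed:

```python
# A 'mixed invention': each claim feature is marked as contributing
# to technical character (True) or not (False).
mixed_invention = {
    "SIM card carrying two subscriber identities": True,      # technical
    "allocating costs to private vs. business calls": False,  # business scheme
    "selective activation of one identity": True,             # technical
}

# Per COMVIK, only the contributing features are taken into account
# in the assessment of inventive step.
features_for_inventive_step = [
    feature for feature, technical in mixed_invention.items() if technical
]
```

The business-scheme feature survives in the claim wording, but it cannot support the presence of an inventive step.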

In the context of the problem-solution approach applied to assess the inventive step, it was also important to know how, in the case of mixed inventions, to define the objective technical problem underlying the claimed invention. In typical claimed subject-matter where all features have technical effect, the technical problem to be solved should not be formulated to contain pointers to the solution or partially anticipate it. According to the Board, where the claim refers to an aim to be achieved in a non-technical field, this aim may legitimately appear in the formulation of the problem as part of the framework of the technical problem that is to be solved, in particular as a constraint that has to be met.

In T 0258/03 of 21 April 2004 (Auction method/ HITACHI) the Board made it clear that a method involving technical means is an invention within the meaning of Art. 52(1) EPC; this clarification meant, in fact, a change vis-à-vis the previous case law T 931/95 (Controlling pension benefits system/ PBS PARTNERSHIP). However, according to the Board, method steps consisting of modifications to a business scheme and aimed at circumventing a technical problem rather than solving it by technical means cannot contribute to the technical character of the subject-matter claimed.

6.5. Do AI-related inventions involve inventive activity?

6.5.1. ‘Core AI’

Though core AI often relates to algorithms as such, core AI can still be patentable. The patent drafter needs to write the patent application such that sufficient non-obvious technical effect can be established.

 

6.5.2. Machine learning techniques and trained models

Machine learning techniques and trained models can be patentable. The patent drafter needs to write the patent application such that a sufficient non-obvious technical effect can be established. Claiming variations and ranges may be an issue. To be patentable, the invention must be repeatable. The challenge is to state the claim features in such terms that the claimed subject-matter is repeatable, i.e., independent from ‘settings’ such as the initial values of an iterative learning process. It remains to be answered, however, whether a trained AI, as the result of learning, can be patentable, since the learning process amounts to an optimisation that a person skilled in the art would typically pursue.

6.5.3. AI applied as a tool in a technical field, defined via technical effects

We use AI to optimise a machine. The skilled person always tries to optimise technology and uses the instruments at his or her disposal. At which point do we do something beyond the obvious? How do we know we have passed that point? AI applied as a tool in a technical field is frequently defined via technical effects. The resultant product may well be patentable. It remains to be answered, however, whether the person skilled in the art would not, as a matter of course, have contemplated solving the technical problem underlying the invention by means of artificial intelligence.

7. Which AI-related invention should be patented?

Frequently, AI-related innovation manifests in products that could also have been developed otherwise. A patent as an exclusion right carries weight and value if its infringement can be spotted. Patenting AI-related innovation does not seem to make any sense where patent infringement cannot be observed. Further, AI is frequently used in a distributed fashion, i.e., with several IT-systems co-operating that may be located in different jurisdictions and that may be operated under different ownership. It may therefore be difficult to identify a single person or legal entity to be held responsible as patent infringer for infringement of a patent that protects an AI-related invention. Where the AI-related invention is protected under a method claim, proof of infringement may be particularly difficult since, typically, AI shows a ‘non-linear’ behaviour which appears rather complex since a link between cause and effect cannot easily be identified. However, the difficulty of proving infringement of a protected method rises with the complexity of the protected method.

8. Who invents AI-related inventions?
8.1.

Artificial intelligence leads some people to wonder if, one day, AI will invent. Let there be no doubt: there cannot be any invention without a natural person as inventor! Where there is no natural person as inventor, there is no invention. No jurisdiction of major importance seems to accept that an invention could exist without an inventor. That said, German law, up to some point during World War II, allowed a company to be named as inventor – a view that would be completely inconsistent with the current understanding of what an invention is. Below, the current understanding, as reflected by way of example in the European Patent Convention (EPC) and in German law, shall be recited. It can be seen that an invention originates with a natural person, the inventor.

 

8.1.1.

According to the EPC, the right to a European patent belongs to the inventor (Art. 60 EPC).

(1) The right to a European patent shall belong to the inventor or his successor in title. If the inventor is an employee, the right to a European patent shall be determined in accordance with the law of the State in which the employee is mainly employed; …

(2) If two or more persons have made an invention independently of each other, the right to a European patent therefor shall belong to the person whose European patent application has the earliest date of filing, provided that this first application has been published.

(3) …

 

8.1.2.

According to German law, the right to a German patent belongs to the inventor.

According to German Patent Act Section 6, the right to a patent shall belong to the inventor or his successor in title. If two or more persons have jointly made an invention, the right to the patent shall belong to them jointly. As in the EPC cited above, in order to avoid double patenting, there is an exception to this right, if two or more persons have made the invention independently of each other. In that case, the right shall belong to the person who is the first to file the application in respect of the invention with the German Patent and Trade Mark Office.

According to German Employee Inventions Act Section 6, the employer can claim a service invention from the employee. This means that, at first, the invention is owned by the employee.

(1) The employer may claim a service invention by declaration to the employee.

(2) The claim shall be deemed declared if the employer does not release the service invention by declaration in text form within four months after receipt of the proper notification (Section 5 (2) sentences 1 and 3).

In order for the employer to learn about the employee’s invention, the German Employee Inventions Act Section 5 obliges the employee to notify the employer about the invention:

(1) The employee who has made a service invention is obliged to notify the employer separately in text form without delay, stating that it is a declaration of an invention. If several employees are involved …

(2) In the notification the employee shall describe the technical task, its solution and the establishment of the service invention. ….

8.2.

From patent law’s point of view: What is an invention as opposed to an embodiment? For the following discussion, it will be good to refresh the discussion of the double function of the patent as an exclusion right and as a publication of innovation (see above, Section 3.3.).

8.2.1.

Typically, the inventor arrives at the invention in the context of a particular development task that is product-related. While the solution presented by the inventor reflects a concept underlying the invention, frequently, the inventor is not aware of the concept.

8.2.2.

The task of a patent attorney is to elaborate and define the concept in one or more independent patent claims, each of which is meant to constitute a right to exclude third parties from practising the invention. For example, the EPC stipulates the form and content of claims (Rule 43 EPC):

(1) The claims shall define the matter for which protection is sought in terms of the technical features of the invention. Wherever appropriate, claims shall contain:

(a) a statement indicating the designation of the subject-matter of the invention and those technical features which are necessary for the definition of the claimed subject-matter but which, in combination, form part of the prior art;

(b) a characterising portion, beginning with the expression ‘characterised in that’ or ‘characterised by’ and specifying the technical features for which, in combination with the features stated under sub-paragraph (a), protection is sought.

(2) … may contain more than one independent claim … :

(a) a plurality of interrelated products,

(b) different uses of a product or apparatus,

(c) alternative solutions to a particular problem, …

8.2.3.

Further, the patent application has to disclose how the invention works. Typically, the results of the concrete development work will be described as so-called embodiments according to the invention, in numerous variations, in the description, also called the specification. For example, the EPC stipulates the content of the description (Rule 42 EPC):

(1) The description shall:

(a) specify the technical field to which the invention relates;

(b) … background art …

(c) disclose the invention, as claimed, in such terms that the technical problem, …, and its solution can be understood, and … effects …;

(d) briefly describe the figures in the drawings, if any;

(e) describe … way of carrying out the invention claimed, … examples …;

(f) indicate … industrially applicable.

8.3. Does AI invent? If not AI, who is the inventor? 

Does AI invent? It is conceivable that artificial intelligence could formulate a patent claim. For example, one could think of a system that analyses patent publications to identify solutions that have been overlooked in the past in order to compose a new solution. However, would this AI qualify as an inventor? The answer must, under all circumstances, be ‘No’, since the AI is not a natural person, but merely a tool available to the natural person.

Assuming the claimed invention to have technical character and to be novel, the invention can be found patentable or not. Below, we state considerations that may lead to the respective finding.

The invention is not patentable: In light of the common availability of advanced AI, applying AI to solve the problem is obvious. In effect, as a result of the common availability of AI, the capabilities of the person skilled in the art are much enhanced by the AI. Accordingly, the solution found by the AI is obvious, i.e., the solution does not involve an inventive step.

The invention is patentable: While the person skilled in the art may have contemplated the use of AI to solve his or her problem, a human thought process that went on outside the AI’s process of solving the objective problem provided inventive step. For example, such human thought process could be:

  • definition of the problem to be solved,
  • application of AI to the task perhaps despite some teaching to the contrary,
  • choice of AI technique that was applied to the task,
  • preparation of the AI for solving the problem, e.g., the design of a model,
  • collection of AI training data,
  • selection of AI training process,
  • selection of claimed result(s) from a wealth of results produced by AI.

In light of the above considerations, it seems likely that in some fields of technology that are amenable to the use of AI, it will become more difficult to obtain patent protection. In particular, it seems that the knowledge of the person skilled in the art will be expanded to include use of artificial intelligence in the development of solutions to technical problems. 

9. Conclusions

9.1. Artificial Intelligence

There are no intelligent computers; there are only computers that are intelligently programmed. There is no single definition of artificial intelligence. AI can be understood as a special type of software. It is special since it can result from a self-adaptive optimisation process.

 

9.2. Instruments for protecting software

In principle, software can be protected under copyright law, as a trade secret and by patent.

 

9.3. Patenting software

Patents are granted for inventions that have a technical effect. Accordingly, when an invention is examined in the patent office, features that do not have any technical effect will likely be ignored.

 

9.4. Does AI-related innovation pertain to a field of technology?

If the AI-related innovation encompasses hardware then it belongs to a field of technology.

 

9.5. Do AI-related inventions involve inventive activity?

If those features of the AI-related innovation that contribute to a technical effect of the innovation are not obvious in light of any prior publication then the AI-related innovation will likely be found to involve inventive activity.

 

9.6. Who invents AI-related inventions?

Only humans invent. AI-related inventions are therefore invented by humans. Use of AI in an innovative process will likely be seen as use of a typical tool available to the person skilled in the art. An invention may rather be found in how the problem was put that is to be solved with the assistance of AI, or how the AI was used in order to assist in solving the problem.

 

9.7. Which AI-related invention should be patented?

Patents are exclusion rights. For an investment into patent protection to make sense, third-party use should be detectable. Therefore, if it seems difficult or even impossible to spot an unauthorised use of an invention protected under patent, the investment into patent protection may not make business sense. In particular, where the invention cannot be replicated by an interested third party using publicly available information other than the patent publication, protection as a trade secret may prove more effective than patent protection.

If the AI-related invention cannot be conceptually captured in terms of features of a patent claim, perhaps because the concept is not recognised, i.e., if the patent claim wording must be limited to describing an embodiment of the AI-related invention, the protective scope of that claim may be very limited. In that case, too, the investment into patent protection may not make much business sense.