AR portrait photographing method and system based on deep learning
1. An AR portrait photographing method based on deep learning, characterized by comprising the following steps:
acquiring an image with a portrait, and performing human-body semantic segmentation on the image with the portrait through a semantic deep neural network to obtain a semantic mask;
performing depth estimation on the image with the portrait through the semantic deep neural network with a depth branch to obtain the depths of human-body pixels, and determining the relative occlusion relationship between the portrait and virtual content in the image;
and performing erosion, dilation and Gaussian filtering on the semantic mask, and fusing the portrait and the virtual content, whose relative occlusion relationship has been determined, through guided filtering to obtain a fused image.
2. The method of claim 1, wherein performing human-body semantic segmentation on the image with the portrait through the semantic deep neural network to obtain the semantic mask comprises:
performing convolution calculations on the image through an encoder module, and outputting convolution parameters;
processing the convolution parameters output by each Block layer through a decoder module, and outputting a mask reference;
and performing a convolution operation on the mask reference with the convolution parameters to obtain the semantic mask.
3. The method of claim 2, wherein, prior to performing the convolution calculations on the image by the encoder module, the method further comprises:
reducing the size of the image through a SpaceToDepth operation, which converts the spatial resolution of the image into channels.
4. The method of claim 2, wherein processing the convolution parameters output by each Block layer through the decoder module and outputting the mask reference further comprises:
performing up-sampling on the convolution parameters output by each Block layer through an FPN to obtain feature map information.
5. An AR portrait photographing system based on deep learning, the system comprising:
a semantic segmentation module, configured to acquire an image with a portrait and perform human-body semantic segmentation on the image with the portrait through a semantic deep neural network to obtain a semantic mask;
a depth estimation module, configured to perform depth estimation on the image with the portrait through the semantic deep neural network with a depth branch to obtain the depth of the portrait and determine the relative occlusion relationship between the portrait and virtual content in the image;
and an image fusion module, configured to perform erosion, dilation and Gaussian filtering on the semantic mask, and to fuse the portrait and the virtual content, whose relative occlusion relationship has been determined, through guided filtering to obtain a fused image.
6. The system of claim 5,
the semantic segmentation module is further configured to perform convolution calculations on the image through an encoder module and output convolution parameters,
to process the convolution parameters output by each Block layer through a decoder module and output a mask reference,
and to perform a convolution operation on the mask reference with the convolution parameters to obtain the semantic mask.
7. The system of claim 6, further comprising an image processing module, wherein, before the encoder module performs the convolution calculations on the image,
the image processing module is configured to reduce the size of the image through a SpaceToDepth operation and to convert the spatial resolution of the image into channels.
8. The system of claim 6,
the semantic segmentation module is further configured to perform up-sampling on the convolution parameters output by each Block layer through an FPN to obtain feature map information.
9. An electronic device comprising a memory and a processor, wherein the memory stores a computer program, and the processor is configured to execute the computer program to perform the deep learning based AR portrait photographing method according to any one of claims 1 to 4.
10. A storage medium having a computer program stored therein, wherein the computer program is configured, when run, to perform the deep learning based AR portrait photographing method according to any one of claims 1 to 4.
Background
Augmented reality is a technology that seamlessly integrates real-world information and virtual-world information: entity information that would otherwise be difficult to experience within a certain time and space of the real world, such as visual information, sound, taste and touch, is simulated by computers and other technologies and then superimposed, so that virtual information is applied to the real world and perceived by human senses, providing a sensory experience beyond reality. However, in large-scene or spatial-level augmented reality experiences, the virtual content typically covers a large part of the real scene, for example an entire building. In this case, when a person stands in front of the building to take a group photo, the person is hidden by the virtual content, which degrades the user experience.
In the related art, augmented-reality-based photographing methods include: triggering the augmented reality function according to the type of the current scene to perform augmented reality processing on the preview image and apply special enhancement to objects, which does not consider the problem of virtual content occluding people; adding images and sound effects as enhancement content to the augmented reality photo, which likewise does not solve the occlusion of people; and generating the augmented reality picture mainly through manual post-processing, in which case the picture lacks realism.
At present, no effective solution has been proposed for the problem in the related art that, when a person is photographed in a virtual scene, the portrait is occluded by virtual objects, resulting in a poor user experience.
Disclosure of Invention
The embodiments of the present application provide an AR portrait photographing method and system based on deep learning, so as to at least solve the problem in the related art that, when a person is photographed in a virtual scene, the portrait is occluded by virtual objects and the user experience is poor.
In a first aspect, an embodiment of the present application provides an AR portrait photographing method based on deep learning, where the method includes:
acquiring an image with a portrait, and performing human-body semantic segmentation on the image with the portrait through a semantic deep neural network to obtain a semantic mask;
performing depth estimation on the image with the portrait through the semantic deep neural network with a depth branch to obtain the depths of human-body pixels, and determining the relative occlusion relationship between the portrait and the virtual content in the image;
and performing erosion, dilation and Gaussian filtering on the semantic mask, and fusing the portrait and the virtual content, whose relative occlusion relationship has been determined, through guided filtering to obtain a fused image.
In some of these embodiments, performing human-body semantic segmentation on the image with the portrait through the semantic deep neural network to obtain the semantic mask includes:
performing convolution calculations on the image through an encoder module, and outputting convolution parameters;
processing the convolution parameters output by each Block layer through a decoder module, and outputting a mask reference;
and performing a convolution operation on the mask reference with the convolution parameters to obtain the semantic mask.
In some of these embodiments, prior to performing the convolution calculations on the image by the encoder module, the method includes:
reducing the size of the image through a SpaceToDepth operation, which converts the spatial resolution of the image into channels.
In some of these embodiments, processing the convolution parameters output by each Block layer through the decoder module and outputting the mask reference further includes:
performing up-sampling on the convolution parameters output by each Block layer through an FPN to obtain feature map information.
In a second aspect, an embodiment of the present application provides an AR portrait photographing system based on deep learning, the system includes:
a semantic segmentation module, configured to acquire an image with a portrait and perform human-body semantic segmentation on the image with the portrait through a semantic deep neural network to obtain a semantic mask;
a depth estimation module, configured to perform depth estimation on the image with the portrait through the semantic deep neural network with a depth branch to obtain the depth of the portrait and determine the relative occlusion relationship between the portrait and the virtual content in the image;
and an image fusion module, configured to perform erosion, dilation and Gaussian filtering on the semantic mask, and to fuse the portrait and the virtual content, whose relative occlusion relationship has been determined, through guided filtering to obtain a fused image.
In some of these embodiments, the semantic segmentation module is further configured to perform convolution calculations on the image through an encoder module and output convolution parameters,
to process the convolution parameters output by each Block layer through a decoder module and output a mask reference,
and to perform a convolution operation on the mask reference with the convolution parameters to obtain the semantic mask.
In some of these embodiments, the system further includes an image processing module, wherein, before the encoder module performs the convolution calculations on the image,
the image processing module is configured to reduce the size of the image through a SpaceToDepth operation and to convert the spatial resolution of the image into channels.
In some of these embodiments, the semantic segmentation module is further configured to perform up-sampling on the convolution parameters output by each Block layer through the FPN to obtain the feature map information.
In a third aspect, an embodiment of the present application provides an electronic apparatus, which includes a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, implements the deep learning based AR portrait photographing method according to the first aspect.
In a fourth aspect, an embodiment of the present application provides a storage medium on which a computer program is stored, where the program, when executed by a processor, implements the deep learning based AR portrait photographing method according to the first aspect.
Compared with the related art, the deep learning based AR portrait photographing method provided by the embodiments of the present application acquires an image with a portrait and performs human-body semantic segmentation on the image through a semantic deep neural network to obtain a semantic mask; then performs depth estimation on the image through the semantic deep neural network with a depth branch to obtain the depths of human-body pixels and determine the relative occlusion relationship between the portrait and the virtual content in the image; and finally performs erosion, dilation and Gaussian filtering on the semantic mask and fuses the portrait and the virtual content, whose relative occlusion relationship has been determined, through guided filtering to obtain a fused image.
In a large-scene augmented reality experience, the virtual scene occupies a large proportion of the picture, so the portrait is easily occluded and the photographing experience is poor. In the present application, human-body semantics are segmented through the deep neural network to obtain the human-body semantic mask in the image, the depths of portrait pixels and background pixels are calculated through depth estimation to determine the relative occlusion relationship between the portrait and the virtual content, and the virtual content and the portrait photo are then fused in a targeted manner, so that a more flexible and realistic photo is obtained, the image quality is improved, and the user experience is improved.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1 is a schematic application environment diagram of an AR portrait photographing method based on deep learning according to an embodiment of the present application;
FIG. 2 is a flowchart of an AR portrait photographing method based on deep learning according to an embodiment of the present application;
FIG. 3 is a schematic structural diagram of a semantic deep neural network according to an embodiment of the present application;
FIG. 4 is a schematic structural diagram of a semantic deep neural network with deep branches according to an embodiment of the present application;
FIG. 5 is a block diagram of an AR portrait photographing system based on deep learning according to an embodiment of the present application;
fig. 6 is an internal structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be described and illustrated below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments provided in the present application without any inventive step are within the scope of protection of the present application. Moreover, it should be appreciated that in the development of any such actual implementation, as in any engineering or design project, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which may vary from one implementation to another.
Reference in the specification to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the specification. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of ordinary skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments without conflict.
Unless defined otherwise, technical or scientific terms referred to herein shall have the ordinary meaning as understood by those of ordinary skill in the art to which this application belongs. References to "a," "an," "the," and similar words throughout this application are not to be construed as limiting in number, and may refer to the singular or the plural. The terms "including," "comprising," "having," and any variations thereof used in the present application are intended to cover non-exclusive inclusions; for example, a process, method, system, article, or apparatus that comprises a list of steps or modules (elements) is not limited to the listed steps or elements, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus. References to "connected," "coupled," and the like in this application are not limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. Reference herein to "a plurality" means greater than or equal to two. "And/or" describes an association relationship of associated objects, meaning that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone. The terms "first," "second," "third," and the like herein merely distinguish similar objects and do not denote a particular ordering of the objects.
The AR portrait photographing method based on deep learning provided by the present application can be applied to the application environment shown in FIG. 1. FIG. 1 is a schematic diagram of the application environment of the AR portrait photographing method based on deep learning according to an embodiment of the present application. The terminal 11 and the server 10 communicate with each other via a network. The server 10 acquires an image with a portrait, and performs human-body semantic segmentation on the image through a semantic deep neural network to obtain a semantic mask; then performs depth estimation on the image through the semantic deep neural network with a depth branch to obtain the depths of human-body pixels and determine the relative occlusion relationship between the portrait and the virtual content in the image; and finally performs erosion, dilation and Gaussian filtering on the semantic mask, fuses the portrait photo and the virtual content, whose relative occlusion relationship has been determined, through guided filtering to obtain a fused image, and displays the fused image on the terminal 11. The terminal 11 may be, but is not limited to, a personal computer, a notebook computer, a smartphone, a tablet computer, a portable wearable device, a camera, or the like, and the server 10 may be implemented by an independent server or a server cluster formed by a plurality of servers. Specifically, the portrait semantic segmentation and the depth estimation in the embodiments of the present application may be processed on the mobile terminal, or the image taken by the camera may be uploaded to the server and processed on the server.
This embodiment provides an AR portrait photographing method based on deep learning. Fig. 2 is a flowchart of the AR portrait photographing method based on deep learning according to an embodiment of the present application; as shown in fig. 2, the process includes the following steps:
Step S201, acquiring an image with a portrait, and performing human-body semantic segmentation on the image with the portrait through a semantic deep neural network to obtain a semantic mask;
Preferably, in this embodiment, the semantic deep neural network for human-body semantic segmentation is divided into two modules: an encoder (encode) module and a decoder (decode) module. Fig. 3 is a schematic structural diagram of the semantic deep neural network according to an embodiment of the present application. As shown in fig. 3, the encoder module includes Block1 to Block4 and adopts a modified version of MobileNetV3 as the backbone.
Optionally, in order to reduce the amount of computation, before the encoder module performs convolution calculations on the input image, the RGB input image of size 1 × 3 × 512 × 512 is first converted into a tensor of size 1 × 48 × 128 × 128 by a SpaceToDepth operation, which turns spatial resolution into channels. This reduces the amount of computation of the network while keeping the image fed into the network large enough to achieve the best pixel-level segmentation effect.
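For illustration only, the SpaceToDepth rearrangement can be sketched with PyTorch's pixel_unshuffle; the block size of 4 used below is an assumption inferred from the 1 × 3 × 512 × 512 → 1 × 48 × 128 × 128 shapes rather than a value stated explicitly in the text.

```python
import torch
import torch.nn.functional as F

# Minimal sketch of the SpaceToDepth step, assuming a block size of 4
# (3 channels * 4^2 = 48 channels, 512 / 4 = 128), which matches the
# 1x3x512x512 -> 1x48x128x128 shapes described above.
x = torch.randn(1, 3, 512, 512)                    # RGB input image tensor
x_small = F.pixel_unshuffle(x, downscale_factor=4)
print(x_small.shape)                               # torch.Size([1, 48, 128, 128])
```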
Further, as shown in fig. 3, the size-reduced image is input into Block1 of the encoder module for convolution calculation. Block1 uses a 1 × 1 convolution with 48 input channels and 40 output channels, without downsampling, followed by a BatchNorm layer and an h-swish activation function. The result then passes through Block2, Block3 and Block4 of the MobileNetV3 structure in turn, producing feature maps of sizes 64 × 64, 32 × 32 and 16 × 16 respectively.
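For illustration only, Block1 as described (a 1 × 1 convolution from 48 to 40 channels with no downsampling, followed by BatchNorm and h-swish) could be sketched as follows; the MobileNetV3 blocks used for Block2 to Block4 are not reproduced here.

```python
import torch
import torch.nn as nn

# Sketch of Block1 as described: 1x1 conv, 48 -> 40 channels, stride 1
# (no downsampling), followed by BatchNorm and h-swish activation.
block1 = nn.Sequential(
    nn.Conv2d(in_channels=48, out_channels=40, kernel_size=1, stride=1, bias=False),
    nn.BatchNorm2d(40),
    nn.Hardswish(),
)

x_small = torch.randn(1, 48, 128, 128)   # output of the SpaceToDepth step
out1 = block1(x_small)                   # 1 x 40 x 128 x 128
```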
After the convolution parameters output by each Block layer are obtained, they are processed by the decoder module, which outputs a mask reference. Specifically, as shown in fig. 3, the output of Block4 enters decode4 for calculation, is then concatenated (concat) with the output of Block3, passes through decode3, and so on until the Block2 level has been processed; the resulting features are output and enter the prototype network branch. Preferably, in this embodiment, the decoder upsamples the features output by each Block layer in an FPN manner to obtain feature map information, so that multi-scale information can be fully utilized. It should be noted that the semantic deep neural network has two different network branches: a weight branch, which outputs the convolution parameters, and a prototype branch, which provides the mask reference.
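As a rough sketch of one decode step described above (upsample the deeper feature, concatenate it with the output of the corresponding Block, then convolve); the 3 × 3 fusion convolution and bilinear upsampling are assumptions, since the text only specifies the concat connection and FPN-style upsampling.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DecodeBlock(nn.Module):
    """One FPN-style decode step: upsample, concat with the skip feature, fuse."""
    def __init__(self, in_ch, skip_ch, out_ch):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(in_ch + skip_ch, out_ch, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, deep, skip):
        # Upsample the deeper feature to the skip feature's resolution, then fuse.
        deep = F.interpolate(deep, size=skip.shape[-2:], mode="bilinear",
                             align_corners=False)
        return self.fuse(torch.cat([deep, skip], dim=1))
```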
Finally, a convolution calculation is performed on the mask reference of the prototype branch with the convolution parameters output by the weight branch to obtain the final semantic segmentation mask. The weight branch is computed directly from the output of Block4: the Block4 output is passed through global average pooling and a fully connected layer whose output dimension is 91, producing 91 convolution parameters. As shown in fig. 3, the prototype branch first applies a convolution with a kernel size of 3 to the output of decode2, reducing the number of channels from 24 to 6; the 91 convolution parameters output by the weight branch are then arranged into three successive 1 × 1 convolutions (with their biases) over the 6-channel prototype feature map, and after these three 1 × 1 convolutions a feature map with a single channel is output, i.e., the final semantic mask.
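The count of 91 parameters is consistent with three successive 1 × 1 convolutions over a 6-channel prototype (6→6, 6→6 and 6→1, each with a bias: 42 + 42 + 7 = 91). The sketch below assumes that split, as well as the nonlinearity between the dynamic convolutions; both are inferences from the parameter count, not details given in the text.

```python
import torch
import torch.nn.functional as F

def dynamic_mask_head(prototype, params):
    """Apply three dynamic 1x1 convolutions to the 6-channel prototype.

    prototype: 1 x 6 x H x W feature map from the prototype branch.
    params:    flat tensor of 91 values from the weight branch, assumed to
               split as (6->6) + bias, (6->6) + bias, (6->1) + bias.
    """
    w1, b1 = params[:36].view(6, 6, 1, 1), params[36:42]
    w2, b2 = params[42:78].view(6, 6, 1, 1), params[78:84]
    w3, b3 = params[84:90].view(1, 6, 1, 1), params[90:91]
    x = F.relu(F.conv2d(prototype, w1, b1))   # ReLU between layers is an assumption
    x = F.relu(F.conv2d(x, w2, b2))
    return F.conv2d(x, w3, b3)                # 1 x 1 x H x W semantic mask logits

mask_logits = dynamic_mask_head(torch.randn(1, 6, 128, 128), torch.randn(91))
```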
It should be noted that, in the present embodiment, for training of the semantic deep neural network, the Loss function is shown in the following equation 1:
Loss = 0.1 × DiceLoss + 0.8 × BinaryFocalLoss + 0.1 × JaccardLoss    (1)
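A minimal sketch of the weighted loss in equation (1), with the Dice, binary focal and Jaccard terms written out directly in PyTorch; the focal-loss parameters alpha and gamma below are common defaults and are assumptions, since the text does not specify them.

```python
import torch

def dice_loss(prob, target, eps=1e-6):
    inter = (prob * target).sum()
    return 1 - (2 * inter + eps) / (prob.sum() + target.sum() + eps)

def jaccard_loss(prob, target, eps=1e-6):
    inter = (prob * target).sum()
    union = prob.sum() + target.sum() - inter
    return 1 - (inter + eps) / (union + eps)

def binary_focal_loss(prob, target, alpha=0.25, gamma=2.0, eps=1e-6):
    # alpha and gamma are assumed defaults, not values from the text.
    prob = prob.clamp(eps, 1 - eps)
    pt = prob * target + (1 - prob) * (1 - target)       # p_t
    w = alpha * target + (1 - alpha) * (1 - target)      # class weighting
    return (-w * (1 - pt) ** gamma * pt.log()).mean()

def segmentation_loss(logits, target):
    """0.1 * DiceLoss + 0.8 * BinaryFocalLoss + 0.1 * JaccardLoss, as in equation (1)."""
    prob = torch.sigmoid(logits)
    return (0.1 * dice_loss(prob, target)
            + 0.8 * binary_focal_loss(prob, target)
            + 0.1 * jaccard_loss(prob, target))
```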
Step S202, depth estimation is performed on the image with the portrait through the semantic deep neural network with a depth branch to obtain the depths of human-body pixels, and the relative occlusion relationship between the portrait and the virtual content in the image is determined;
preferably, the depth estimation network in this embodiment can be used as a network branch on the basis of a semantically segmented network. Fig. 4 is a schematic structural diagram of a semantic depth neural network with depth branches according to an embodiment of the present application, and as shown in fig. 4, the depth estimation network is obtained by adding two convolution layers of 3X3 to an output result of a semantic segmentation network decode module, and through calculation of the depth estimation network branches, a single-channel depth estimation value can be obtained, so that depths of a human body pixel and a background pixel are obtained through calculation, and a relative occlusion relationship between a human body pixel and a background is distinguished. It should be noted that the loss function adopted by the depth estimation network branch in the present embodiment is smoothL 1.
In a large-space augmented reality experience the virtual content can be diverse, with different content at different depths. To resolve the occlusion relationship between the virtual content and the human body, the depth values of the portrait and the background must be estimated in order to determine whether the virtual content is in front of or behind the portrait: an object in front of the portrait occludes it, while an object behind the portrait becomes background and is occluded by it. Specifically, in this embodiment, depth estimation is performed on the image with the portrait through the semantic deep neural network with the depth branch to obtain the depths of the portrait and the background pixels, and the relative occlusion relationship between the portrait and the background is determined by comparing these depths.
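For illustration, the per-pixel occlusion decision can be expressed as a simple depth comparison between the estimated portrait depth and the depth of the rendered virtual content; the convention that smaller values mean closer to the camera is an assumption.

```python
import numpy as np

def resolve_occlusion(person_depth, virtual_depth, person_mask):
    """Per-pixel visibility of the portrait against the virtual content.

    person_depth:  HxW estimated depth of the camera image (smaller = closer, assumed).
    virtual_depth: HxW depth buffer of the rendered virtual content.
    person_mask:   HxW binary semantic mask of the portrait (1 = person).
    Returns an HxW mask that is 1 where the portrait should stay visible.
    """
    person_in_front = person_depth < virtual_depth
    return (person_mask.astype(bool) & person_in_front).astype(np.float32)
```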
Step S203, erosion, dilation and Gaussian filtering are performed on the semantic mask, and the portrait and the virtual content, whose relative occlusion relationship has been determined, are fused through guided filtering to obtain a fused image;
in this embodiment, since the semantic mask is a binary image, for example, the portrait may be set to 1, and the background is set to 0, under the condition that the semantic mask estimation is not very accurate, if the mask value is directly used to fuse the portrait and the background, there is a very obvious split feeling, so that morphological operations such as erosion and expansion need to be performed on the mask, then gaussian filtering processing is performed, and finally, guided filtering is adopted to fuse the portrait and the virtual content, which have determined a relative occlusion relationship, so as to achieve an effect of natural edge transition, and finally, a fine and beautiful synthetic picture is obtained.
Through the above steps S201 to S203, in the embodiments of the present application, human-body semantics are segmented through the deep neural network to obtain the human-body semantic mask in the image, the depths of human-body pixels and background pixels are calculated through depth estimation to determine the relative occlusion relationship between the portrait and the background, and finally the virtual content and the portrait are fused in a targeted manner, so that a more flexible and realistic photo is obtained. This solves the problem that, when a person is photographed in a virtual scene, the portrait is occluded by virtual objects and the user experience is poor, improves the image quality, and improves the user experience.
It should be noted that the steps illustrated in the above flowcharts may be performed in a computer system, such as a set of computer-executable instructions, and that, although a logical order is shown in the flowcharts, in some cases the steps illustrated or described may be performed in an order different from that shown here.
This embodiment further provides an AR portrait photographing system based on deep learning, which is used to implement the above embodiments and preferred implementations; what has already been described is not repeated here. As used below, the terms "module," "unit," "subunit," and the like may be a combination of software and/or hardware that implements a predetermined function. Although the means described in the following embodiments are preferably implemented in software, an implementation in hardware, or in a combination of software and hardware, is also possible and contemplated.
Fig. 5 is a block diagram of a structure of an AR portrait photographing system based on deep learning according to an embodiment of the present application, and as shown in fig. 5, the system includes a semantic segmentation module 51, a depth estimation module 52, and an image fusion module 53:
The semantic segmentation module 51 is configured to acquire an image with a portrait, and to perform human-body semantic segmentation on the image through a semantic deep neural network to obtain a semantic mask; the depth estimation module 52 is configured to perform depth estimation on the image through the semantic deep neural network with a depth branch to obtain the depths of human-body pixels, and to determine the relative occlusion relationship between the portrait and the virtual content in the image; and the image fusion module 53 is configured to perform erosion, dilation and Gaussian filtering on the semantic mask, and to fuse the portrait and the virtual content, whose relative occlusion relationship has been determined, through guided filtering to obtain a fused image.
With this system, human-body semantics are segmented through the deep neural network in the semantic segmentation module 51 to obtain the semantic mask of the human body in the image, the depths of human-body pixels and background pixels are then calculated by the depth estimation module 52 to determine the relative occlusion relationship between the portrait and the background, and finally the virtual content and the portrait are fused in a targeted manner by the image fusion module 53 to obtain a more flexible and realistic photo.
It should be noted that, for specific examples in other embodiments of the present application, reference may be made to the examples described in the above embodiments and optional implementations of the deep learning based AR portrait photographing method, and details are not repeated in this embodiment.
Note that each of the above modules may be a functional module or a program module, and may be implemented by software or hardware. For modules implemented by hardware, the modules may be located in the same processor, or may be distributed among different processors in any combination.
The present embodiment also provides an electronic device comprising a memory having a computer program stored therein and a processor configured to execute the computer program to perform the steps of any of the above method embodiments.
Optionally, the electronic apparatus may further include a transmission device and an input/output device, wherein the transmission device is connected to the processor, and the input/output device is connected to the processor.
In addition, in combination with the deep learning based AR portrait photographing method in the above embodiments, an embodiment of the present application may provide a storage medium for implementation. The storage medium has a computer program stored thereon; when the computer program is executed by a processor, any one of the deep learning based AR portrait photographing methods in the above embodiments is implemented.
In one embodiment, a computer device is provided, which may be a terminal. The computer device includes a processor, a memory, a network interface, a display screen, and an input device connected by a system bus, wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement the deep learning based AR portrait photographing method. The display screen of the computer device may be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer device may be a touch layer covering the display screen, a key, a track ball or a touch pad arranged on the housing of the computer device, or an external keyboard, touch pad or mouse.
In an embodiment, an electronic device is provided, which may be a server; fig. 6 is a schematic diagram of the internal structure of the electronic device according to an embodiment of the present application. As shown in fig. 6, the electronic device comprises a processor, a network interface, an internal memory and a non-volatile memory connected by an internal bus, wherein the non-volatile memory stores an operating system, a computer program and a database. The processor is used to provide calculation and control capabilities, the network interface is used to communicate with an external terminal through a network connection, the internal memory provides an environment for the operating system and the running of the computer program, the computer program is executed by the processor to implement the deep learning based AR portrait photographing method, and the database is used to store data.
Those skilled in the art will appreciate that the configuration shown in fig. 6 is a block diagram of only a portion of the configuration associated with the present application, and does not constitute a limitation on the electronic device to which the present application is applied, and a particular electronic device may include more or less components than those shown in the drawings, or may combine certain components, or have a different arrangement of components.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing the relevant hardware; the program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
It should be understood by those skilled in the art that various features of the above-described embodiments can be combined in any combination, and for the sake of brevity, all possible combinations of features in the above-described embodiments are not described in detail, but rather, all combinations of features which are not inconsistent with each other should be construed as being within the scope of the present disclosure.
The above-mentioned embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, and these all fall within the scope of protection of the present application. Therefore, the scope of protection of this patent shall be subject to the appended claims.