Target object identification method and device, computer equipment and storage medium
1. A target object recognition method, comprising:
acquiring a target object for a target service, wherein the target object comprises at least one of target object basic attribute information, target device basic attribute information, and target network connection attribute information;
performing characterization processing on the target object to obtain a target object feature, wherein the target object feature has a corresponding relationship with the target object;
determining, based on the target object feature, a first interestingness feature and a second interestingness feature after feature processing, wherein a feature point score of the first interestingness feature is smaller than a first threshold, a feature point score of the second interestingness feature is greater than the first threshold, and the feature point score indicates an importance degree of a feature;
performing noise addition processing on the first interestingness feature to obtain a first noise-added feature;
obtaining, based on the second interestingness feature and the first noise-added feature, an interest point classification result corresponding to the target object through a target classification model; and
determining an interestingness label of the target object according to the interest point classification result corresponding to the target object.
2. The method of claim 1, further comprising:
acquiring a service sample set for the target service; and
performing characterization processing on the service sample set to obtain a service sample feature set;
wherein the determining, based on the target object feature, a first interestingness feature and a second interestingness feature after feature processing comprises:
determining, based on the target object feature and the service sample feature set, the first interestingness feature and the second interestingness feature after feature processing.
3. The method according to claim 1, wherein the obtaining, based on the second interestingness feature and the first noise-added feature, an interest point classification result corresponding to the target object through a target classification model comprises:
determining, based on the second interestingness feature and the first noise-added feature, a third interestingness feature and a fourth interestingness feature after feature processing, wherein a correlation degree between the third interestingness feature and another feature is smaller than a second threshold, and a correlation degree between the fourth interestingness feature and another feature is greater than the second threshold;
performing noise addition processing on the third interestingness feature to obtain a second noise-added feature; and
obtaining, based on the fourth interestingness feature and the second noise-added feature, the interest point classification result corresponding to the target object through the target classification model.
4. The method according to any one of claims 1 to 3, further comprising:
acquiring an object sample set for the target service, wherein the object sample set comprises N object samples, each object sample corresponds to an interestingness label, and each object sample comprises at least one of object basic attribute information, device basic attribute information, and network connection attribute information;
performing characterization processing on the object sample set to obtain an object sample feature set, wherein the object sample feature set comprises N object sample features, and the object sample features have corresponding relationships with the object samples;
determining a first interestingness feature set and a second interestingness feature set based on the object sample feature set, wherein the first interestingness feature set comprises P object sample features whose feature point scores are smaller than the first threshold, the second interestingness feature set comprises Q object sample features whose feature point scores are greater than the first threshold, and P and Q are integers greater than or equal to 1;
performing noise addition processing on the first interestingness feature set to obtain a first noise-added feature set;
obtaining, based on the second interestingness feature set and the first noise-added feature set, interest point classification results corresponding to the N object samples through a classification model to be trained; and
training the classification model to be trained according to the interest point classification results corresponding to the N object samples and the interestingness labels corresponding to the N object samples.
5. The method of claim 4, further comprising:
acquiring a service sample set for the target service; and
performing characterization processing on the service sample set to obtain a service sample feature set;
wherein the determining a first interestingness feature set and a second interestingness feature set based on the object sample feature set comprises:
determining the first interestingness feature set and the second interestingness feature set based on the object sample feature set and the service sample feature set.
6. The method according to claim 4, wherein the obtaining, based on the second interestingness feature set and the first noise-added feature set, interest point classification results corresponding to the N object samples through the classification model to be trained comprises:
determining a third interestingness feature set and a fourth interestingness feature set based on the second interestingness feature set and the first noise-added feature set, wherein a correlation degree between object sample features in the third interestingness feature set is smaller than a second threshold, and a correlation degree between object sample features in the fourth interestingness feature set is greater than the second threshold;
performing noise addition processing on the third interestingness feature set to obtain a second noise-added feature set; and
obtaining, based on the fourth interestingness feature set and the second noise-added feature set, the interest point classification results corresponding to the N object samples through the classification model to be trained.
7. The method of claim 4, wherein the acquiring an object sample set for the target service comprises:
acquiring an initial object sample set for the target service;
determining a preset threshold range based on the target service; and
determining the N object samples from the initial object sample set of the target service based on the preset threshold range.
8. The method of claim 5, wherein the determining the first interestingness feature set and the second interestingness feature set based on the object sample feature set and the service sample feature set comprises:
performing aggregation processing on the object sample feature set and the service sample feature set based on a plurality of preset time periods to obtain a fifth interestingness feature set;
performing feature processing on the fifth interestingness feature set to obtain a sixth interestingness feature, wherein the feature processing comprises at least one of normalization feature processing and discretization feature processing; and
determining the first interestingness feature set and the second interestingness feature set based on the sixth interestingness feature.
9. The method of claim 8, wherein the determining the first interestingness feature set and the second interestingness feature set based on the sixth interestingness feature comprises:
performing dimension reduction processing on the sixth interestingness feature to obtain a first object behavior feature;
performing sorting processing on the sixth interestingness feature to obtain a second object behavior feature;
performing aggregation processing on the first object behavior feature and the second object behavior feature to obtain a seventh interestingness feature set; and
processing the seventh interestingness feature set based on a service sample to determine the first interestingness feature set and the second interestingness feature set.
10. The method of claim 9, wherein the processing the seventh interestingness feature set based on the service sample to determine the first interestingness feature set and the second interestingness feature set comprises:
determining a preset policy based on the service sample;
screening the seventh interestingness feature set based on the preset policy to obtain features meeting the preset policy and features not meeting the preset policy;
averaging the features meeting the preset policy to obtain a feature average value;
performing missing-value marking processing on the features not meeting the preset policy to obtain a missing-marked feature set; and
performing splicing processing on the feature average value and the missing-marked feature set to determine the first interestingness feature set and the second interestingness feature set.
11. The method according to claim 10, wherein the performing splicing processing on the feature average value and the missing-marked feature set to determine the first interestingness feature set and the second interestingness feature set comprises:
performing splicing processing on the feature average value and the missing-marked feature set to obtain a spliced feature set; and
determining the first interestingness feature set and the second interestingness feature set from the spliced feature set based on the preset policy.
12. The method of claim 11, further comprising:
obtaining, based on the spliced feature set, interest point classification results corresponding to the N object samples for each candidate classification model through a plurality of candidate classification models, wherein the plurality of candidate classification models are models of different types;
training the plurality of candidate classification models respectively, based on the interest point classification results corresponding to the N object samples of each candidate classification model and the interestingness labels corresponding to the N object samples, to obtain a plurality of classification models;
determining the classification model to be trained from the plurality of classification models; and
updating, based on the interest point classification results corresponding to the N object samples and the interestingness labels corresponding to the N object samples, model parameters of the classification model to be trained according to a target loss function to obtain the target classification model.
13. An object recognition apparatus, characterized in that the object recognition apparatus comprises:
an acquisition module, configured to acquire a target object for a target service, wherein the target object comprises at least one of target object basic attribute information, target device basic attribute information, and target network connection attribute information; and
a processing module, configured to perform characterization processing on the target object to obtain a target object feature, wherein the target object feature has a corresponding relationship with the target object;
wherein the processing module is further configured to determine, based on the target object feature, a first interestingness feature and a second interestingness feature after feature processing, wherein a feature point score of the first interestingness feature is smaller than a first threshold, a feature point score of the second interestingness feature is greater than the first threshold, and the feature point score indicates an importance degree of a feature;
the processing module is further configured to perform noise addition processing on the first interestingness feature to obtain a first noise-added feature;
the acquisition module is further configured to obtain, based on the second interestingness feature and the first noise-added feature, an interest point classification result corresponding to the target object through a target classification model; and
the processing module is further configured to determine an interestingness label of the target object according to the interest point classification result corresponding to the target object.
14. A computer device, comprising: a memory, a transceiver, a processor, and a bus system;
wherein the memory is used for storing programs;
the processor is configured to execute a program in the memory to implement the method of any one of claims 1 to 12;
the bus system is used for connecting the memory and the processor so as to enable the memory and the processor to communicate.
15. A computer-readable storage medium comprising instructions that, when executed on a computer, cause the computer to perform the method of any of claims 1 to 12.
Background
With the development of internet technology, more and more demands can be served through the internet. Taking internet online education and training as an example, people have a growing number of service demands in education scenarios, so users may have a high degree of interest in internet online education advertisements; the expression of this high interest includes, but is not limited to, a high advertisement click rate and a high education product payment rate, and how to identify objects with a high degree of interest is therefore becoming increasingly important. At present, the industry mainly predicts the probability that a current object is a high-interest object or an ordinary-interest object by constructing multi-dimensional features and training a model. However, a machine learning model often fits the sample data very closely, so the model parameters and detailed prediction results retain many features of the original data, which can lead to leakage of the original data.
Disclosure of Invention
Embodiments of the present application provide a target object identification method and apparatus, a computer device, and a storage medium, which ensure the accuracy of the interest point classification result output by a model on the premise of retaining important features, thereby improving the accuracy of target object identification; in addition, the protection of each type of attribute information can be improved by scrambling the features of the target object.
In view of this, a first aspect of the present application provides a target object identification method, including:
acquiring a target object for a target service, wherein the target object comprises at least one of target object basic attribute information, target device basic attribute information, and target network connection attribute information;
performing characterization processing on the target object to obtain a target object feature, wherein the target object feature has a corresponding relationship with the target object;
determining, based on the target object feature, a first interestingness feature and a second interestingness feature after feature processing, wherein a feature point score of the first interestingness feature is smaller than a first threshold, a feature point score of the second interestingness feature is greater than the first threshold, and the feature point score indicates an importance degree of a feature;
performing noise addition processing on the first interestingness feature to obtain a first noise-added feature;
obtaining, based on the second interestingness feature and the first noise-added feature, an interest point classification result corresponding to the target object through the target classification model; and
determining an interestingness label of the target object according to the interest point classification result corresponding to the target object.
A second aspect of the present application provides an object recognition apparatus, including:
an acquisition module, configured to acquire a target object for a target service, wherein the target object comprises at least one of target object basic attribute information, target device basic attribute information, and target network connection attribute information;
a processing module, configured to perform characterization processing on the target object to obtain a target object feature, wherein the target object feature has a corresponding relationship with the target object;
a determining module, configured to determine, based on the target object feature, a first interestingness feature and a second interestingness feature after feature processing, wherein a feature point score of the first interestingness feature is smaller than a first threshold, a feature point score of the second interestingness feature is greater than the first threshold, and the feature point score indicates an importance degree of a feature;
the processing module is further configured to perform noise addition processing on the first interestingness feature to obtain a first noise-added feature;
the acquisition module is further configured to obtain, based on the second interestingness feature and the first noise-added feature, an interest point classification result corresponding to the target object through the target classification model;
the determining module is further configured to determine an interestingness label of the target object according to the interest point classification result corresponding to the target object.
In a possible embodiment, the acquisition module is further configured to acquire a service sample set for the target service;
the processing module is further configured to perform characterization processing on the service sample set to obtain a service sample feature set; and
the determining module is specifically configured to determine, based on the target object feature and the service sample feature set, the first interestingness feature and the second interestingness feature after feature processing.
In a possible embodiment, the acquisition module is specifically configured to determine, based on the second interestingness feature and the first noise-added feature, a third interestingness feature and a fourth interestingness feature after feature processing, wherein a correlation degree between the third interestingness feature and another feature is smaller than a second threshold, and a correlation degree between the fourth interestingness feature and another feature is greater than the second threshold;
perform noise addition processing on the third interestingness feature to obtain a second noise-added feature; and
obtain, based on the fourth interestingness feature and the second noise-added feature, the interest point classification result corresponding to the target object through the target classification model.
In one possible embodiment, the object recognition apparatus further comprises a training module;
the acquisition module is further configured to acquire an object sample set for the target service, wherein the object sample set comprises N object samples, each object sample corresponds to an interestingness label, and each object sample comprises at least one of object basic attribute information, device basic attribute information, and network connection attribute information;
the processing module is further configured to perform characterization processing on the object sample set to obtain an object sample feature set, wherein the object sample feature set comprises N object sample features, and the object sample features have corresponding relationships with the object samples;
the determining module is further configured to determine a first interestingness feature set and a second interestingness feature set based on the object sample feature set, wherein the first interestingness feature set comprises P object sample features whose feature point scores are smaller than the first threshold, the second interestingness feature set comprises Q object sample features whose feature point scores are greater than the first threshold, the feature point score indicates the importance degree of a feature, and P and Q are integers greater than or equal to 1;
the processing module is further configured to perform noise addition processing on the first interestingness feature set to obtain a first noise-added feature set;
the acquisition module is further configured to obtain, based on the second interestingness feature set and the first noise-added feature set, interest point classification results corresponding to the N object samples through a classification model to be trained; and
the training module is configured to train the classification model to be trained according to the interest point classification results corresponding to the N object samples and the interestingness labels corresponding to the N object samples.
In a possible embodiment, the acquisition module is further configured to acquire a service sample set for the target service;
the processing module is further configured to perform characterization processing on the service sample set to obtain a service sample feature set; and
the determining module is specifically configured to determine the first interestingness feature set and the second interestingness feature set based on the object sample feature set and the service sample feature set.
In a possible embodiment, the acquisition module is specifically configured to determine a third interestingness feature set and a fourth interestingness feature set based on the second interestingness feature set and the first noise-added feature set, wherein a correlation degree between object sample features in the third interestingness feature set is smaller than a second threshold, and a correlation degree between object sample features in the fourth interestingness feature set is greater than the second threshold;
perform noise addition processing on the third interestingness feature set to obtain a second noise-added feature set; and
obtain, based on the fourth interestingness feature set and the second noise-added feature set, the interest point classification results corresponding to the N object samples through the classification model to be trained.
In one possible embodiment, the acquisition module is specifically configured to acquire an initial object sample set for the target service;
determine a preset threshold range based on the target service; and
determine the N object samples from the initial object sample set of the target service based on the preset threshold range.
In a possible implementation manner, the determining module is specifically configured to perform aggregation processing on the object sample feature set and the service sample feature set based on a plurality of preset time periods to obtain a fifth interestingness feature set;
perform feature processing on the fifth interestingness feature set to obtain a sixth interestingness feature, wherein the feature processing comprises at least one of normalization feature processing and discretization feature processing; and
determine the first interestingness feature set and the second interestingness feature set based on the sixth interestingness feature.
In a possible implementation manner, the determining module is specifically configured to perform dimension reduction processing on the sixth interestingness feature to obtain a first object behavior feature;
perform sorting processing on the sixth interestingness feature to obtain a second object behavior feature;
perform aggregation processing on the first object behavior feature and the second object behavior feature to obtain a seventh interestingness feature set; and
process the seventh interestingness feature set based on the service sample to determine the first interestingness feature set and the second interestingness feature set.
In a possible embodiment, the determining module is specifically configured to determine a preset policy based on the service sample;
screen the seventh interestingness feature set based on the preset policy to obtain features meeting the preset policy and features not meeting the preset policy;
average the features meeting the preset policy to obtain a feature average value;
perform missing-value marking processing on the features not meeting the preset policy to obtain a missing-marked feature set; and
perform splicing processing on the feature average value and the missing-marked feature set to determine the first interestingness feature set and the second interestingness feature set.
In a possible embodiment, the determining module is specifically configured to perform splicing processing on the feature average value and the missing-marked feature set to obtain a spliced feature set; and
determine the first interestingness feature set and the second interestingness feature set from the spliced feature set based on the preset policy.
In a possible implementation manner, the acquisition module is further configured to obtain, based on the spliced feature set, interest point classification results corresponding to the N object samples for each candidate classification model through a plurality of candidate classification models, wherein the plurality of candidate classification models are models of different types;
the training module is further configured to train the plurality of candidate classification models respectively, based on the interest point classification results corresponding to the N object samples of each candidate classification model and the interestingness labels corresponding to the N object samples, to obtain a plurality of classification models;
the determining module is further configured to determine the classification model to be trained from the plurality of classification models; and
the training module is further configured to update, based on the interest point classification results corresponding to the N object samples and the interestingness labels corresponding to the N object samples, model parameters of the classification model to be trained according to a target loss function to obtain the target classification model.
A third aspect of the present application provides a computer-readable storage medium having stored therein instructions, which, when run on a computer, cause the computer to perform the method of the above-described aspects.
As can be seen from the above technical solutions, the embodiments of the present application have the following advantages:
In the embodiments of the present application, a target object for a target service is acquired, the target object comprising at least one of target object basic attribute information, target device basic attribute information, and target network connection attribute information; characterization processing is then performed on the target object to obtain a target object feature, the target object feature having a corresponding relationship with the target object. Next, based on the target object feature, a first interestingness feature and a second interestingness feature are determined after feature processing, where the feature point score of the first interestingness feature is smaller than a first threshold, the feature point score of the second interestingness feature is greater than the first threshold, and the feature point score indicates the importance degree of a feature. Noise addition processing is then performed on the first interestingness feature to obtain a first noise-added feature, and based on the second interestingness feature and the first noise-added feature, an interest point classification result corresponding to the target object is obtained through the target classification model. Finally, an interestingness label of the target object is determined according to the interest point classification result corresponding to the target object. In this way, during the identification of the target object, the interestingness features with small feature point scores are scrambled while the interestingness features with large feature point scores are not; since a larger feature point score indicates a more important interestingness feature, the accuracy of the interest point classification result output by the model is ensured on the premise of retaining the important interestingness features, thereby improving the accuracy of target object identification. Secondly, since the target object includes at least one of object basic attribute information, device basic attribute information, and network connection attribute information, the protection of each type of attribute information can be improved by scrambling the features.
Drawings
FIG. 1 is a block diagram of an object recognition system according to an embodiment of the present application;
FIG. 2 is a schematic flowchart of a target object identification method according to an embodiment of the present application;
FIG. 3 is a schematic diagram of an embodiment of a target object identification method in the embodiment of the present application;
FIG. 4 is a schematic diagram of another embodiment of a target object identification method in the embodiment of the present application;
FIG. 5 is a schematic diagram of an embodiment of training a classification model to be trained in the embodiment of the present application;
FIG. 6 is a schematic diagram of an embodiment of feature processing in an embodiment of the present application;
FIG. 7 is a schematic diagram of another embodiment of feature processing in an embodiment of the present application;
FIG. 8 is a schematic diagram of an embodiment of determining a classification model to be trained in the embodiment of the present application;
FIG. 9 is a schematic structural diagram of a deep & cross network in an embodiment of the present application;
FIG. 10 is a schematic diagram of an embodiment of an object recognition apparatus in an embodiment of the present application;
FIG. 11 is a schematic diagram of an embodiment of a server in an embodiment of the present application;
FIG. 12 is a schematic diagram of an embodiment of a terminal device in the embodiment of the present application.
Detailed Description
Embodiments of the present application provide a target object identification method and apparatus, a computer device, and a storage medium, which ensure the accuracy of the interest point classification result output by a model on the premise of retaining important features, thereby improving the accuracy of target object identification; in addition, the protection of each type of attribute information can be improved by scrambling the features of the target object.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims of the present application and in the drawings described above, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are, for example, capable of operation in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "corresponding" and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
With the development of internet technology, more and more demands can be served through the internet. Taking internet online education and training as an example, people have a growing number of service demands in education scenarios, so users may have a high degree of interest in internet online education advertisements; the expression of this high interest includes, but is not limited to, a high advertisement click rate and a high education product payment rate, and how to identify objects with a high degree of interest is therefore becoming increasingly important. At present, the industry mainly predicts the probability that a current object is a high-interest object or an ordinary-interest object by constructing multi-dimensional features and training a model. However, a machine learning model often fits the sample data very closely, so the model parameters and detailed prediction results retain many features of the original data, which can lead to leakage of the original data. Based on this, the embodiments of the present application provide a target object identification method, which ensures the accuracy of the interest point classification result output by the model on the premise of retaining important features, thereby improving the accuracy of target object identification, and which can improve the protection of each type of attribute information by scrambling the features of the target object.
For ease of understanding, some terms or concepts related to the embodiments of the present application are explained first.
First, privacy disclosure
Because a machine learning model often fits the sample data very closely, the model parameters and detailed prediction results retain many features of the original private data, which is the source of the current privacy threat.
Second, differential privacy protection
Differential privacy protection is a cryptographic means whose aim is to maximize the accuracy of data queries from a statistical database while minimizing the chance of identifying individual records. In short, individual characteristics are removed on the premise that statistical characteristics are preserved, so as to protect object privacy.
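The text does not specify the concrete noise mechanism; the Laplace mechanism is one common realization of differential privacy. The following is a minimal Python sketch, under the assumption of a Laplace mechanism with a sensitivity and a privacy budget epsilon (both illustrative values), showing how noise can be added to a numeric feature column:

import numpy as np

def laplace_perturb(values, sensitivity=1.0, epsilon=0.5, rng=None):
    """Add Laplace noise with scale sensitivity/epsilon to a numeric array.

    A smaller epsilon (stricter privacy budget) yields larger noise; the
    concrete mechanism and parameters used by the described method are
    assumptions here, not taken from the text.
    """
    rng = np.random.default_rng() if rng is None else rng
    scale = sensitivity / epsilon
    return values + rng.laplace(loc=0.0, scale=scale, size=np.shape(values))

# Example: perturb one feature column while roughly preserving its statistics.
column = np.array([0.12, 0.55, 0.30, 0.91])
noisy_column = laplace_perturb(column)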
Third, education and training payment
Education and training payment means that an object is willing to pay for internet online education and training products or services.
Fourth, high willingness to pay for education and training
A high willingness to pay for education and training refers to a high degree of interest in internet online education advertisements; the expression of this high interest includes, but is not limited to, a high advertisement click rate, a high education product payment rate, and the like.
In the foregoing, some terms and concepts related to the embodiments of the present application have been explained; an application scenario of the embodiments of the present application is described below. It should be understood that the target object identification method may be executed by a terminal device or a server. Referring to fig. 1, fig. 1 is a schematic diagram of the architecture of an object recognition system in an embodiment of the present application; as shown in fig. 1, the object recognition system includes a terminal device and a server. Specifically, the server may update the model parameters of the classification model to be trained according to the target loss function by using the method provided in the embodiments of the present application to obtain the target classification model; after the server acquires a target object for the target service, it can output the interest point classification result corresponding to the target object based on the target classification model and determine the interestingness label of the target object, and on this basis the server may also store the interestingness label of the target object on a blockchain. Alternatively, after the terminal device acquires a target object for the target service, the terminal device may send the target object for the target service to the server; the server updates the model parameters of the classification model to be trained according to the target loss function by using the method provided in the embodiments of the present application to obtain the target classification model, outputs the interest point classification result corresponding to the target object based on the target classification model, determines the interestingness label of the target object, and sends the interestingness label of the target object to the terminal device.
Furthermore, the target object identification method provided in the embodiments of the present application is not only applicable to scenarios in which a high-willingness label for education and training payment is determined for an object; if samples from other scenarios are adjusted and input, the identification probability for the corresponding scenario can also be obtained. For example, in a game payment scenario, a high-willingness label for game payment of an object can be determined based on the target object identification method provided in the embodiments of the present application; for another example, a specific object group, such as teenagers, can be identified by adjusting different parameters and invoking different training models. The application scenarios of the embodiments of the present application are not limited and are not exhaustively listed here.
The server involved in the present application may be an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, Content Delivery Network (CDN), big data, and artificial intelligence platforms. The terminal device may be, but is not limited to, a smartphone, a tablet computer, a laptop computer, a desktop computer, a smart speaker, a smart watch, a vehicle-mounted terminal, a smart television, and the like. The terminal device and the server may communicate with each other through a wireless network, a wired network, or a removable storage medium. The wireless network uses standard communication techniques and/or protocols. The wireless network is typically the internet, but may be any network, including but not limited to Bluetooth, a Local Area Network (LAN), a Metropolitan Area Network (MAN), a Wide Area Network (WAN), a mobile network, a private network, or any combination of virtual private networks. In some embodiments, custom or dedicated data communication techniques may be used in place of or in addition to the data communication techniques described above. The removable storage medium may be a Universal Serial Bus (USB) flash drive, a removable hard drive, or another removable storage medium.
Although only five terminal devices and one server are shown in fig. 1, it should be understood that the example in fig. 1 is only intended to aid understanding of the present solution, and the specific numbers of terminal devices and servers should be flexibly determined according to the actual situation.
Since the target classification model introduced in the embodiments of the present application needs to be implemented based on the field of artificial intelligence, some basic concepts in the field of artificial intelligence are introduced before the target object identification method provided in the embodiments of the present application. Artificial Intelligence (AI) is a theory, method, technique, and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use the knowledge to obtain the best results. In other words, artificial intelligence is a comprehensive technique of computer science that attempts to understand the essence of intelligence and produce a new kind of intelligent machine that can react in a manner similar to human intelligence. Artificial intelligence studies the design principles and implementation methods of various intelligent machines, so that the machines have the functions of perception, reasoning, and decision making. Artificial intelligence technology is a comprehensive discipline covering a wide range of fields, including both hardware-level and software-level technologies. The artificial intelligence infrastructure generally includes technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing technologies, operation/interaction systems, and mechatronics. Artificial intelligence software technology mainly includes computer vision technology, speech processing technology, natural language processing technology, and machine learning/deep learning.
With the research and progress of artificial intelligence technology, artificial intelligence technology has developed in many directions, and Natural Language Processing (NLP) is an important direction in the fields of computer science and artificial intelligence. It studies various theories and methods that enable effective communication between humans and computers using natural language. Natural language processing is a science integrating linguistics, computer science, and mathematics. Research in this field involves natural language, that is, the language people use every day, so it is closely related to the study of linguistics. Natural language processing techniques typically include text processing, semantic understanding, machine translation, question answering, knowledge graphs, and the like. Secondly, Machine Learning (ML) is a multi-field interdisciplinary subject involving probability theory, statistics, approximation theory, convex analysis, algorithmic complexity theory, and other disciplines. It specializes in studying how a computer simulates or realizes human learning behavior so as to acquire new knowledge or skills and reorganize existing knowledge structures to continuously improve its own performance. Machine learning is the core of artificial intelligence and the fundamental way to give computers intelligence, and it is applied in all fields of artificial intelligence. Machine learning and deep learning generally include techniques such as artificial neural networks, belief networks, reinforcement learning, transfer learning, inductive learning, and formal education learning.
Fig. 2 is a schematic flowchart of the target object identification method according to an embodiment of the present application. As shown in fig. 2, the target object identification method includes preparing an object sample set, performing feature processing, selecting among multiple types of classification models, applying differential privacy protection, and invoking the target classification model to determine the interestingness label of the target object. The functions and flows of the respective parts are described below, specifically:
In step A1, an initial object sample set carrying interestingness labels is obtained based on manual labeling and business experience, where an interestingness label indicates whether an object sample has a high willingness to pay; for example, an interestingness label of "1" indicates that the object sample has a high willingness to pay, and an interestingness label of "0" indicates that it does not. The basic portrait of an initial object sample comprises non-private behavior data of the object, such as whether related software of the target service is installed and whether the target service is used. In order to eliminate the influence of non-real objects on modeling analysis, a preset threshold range is set based on the business experience of the target service, initial object samples that fall outside the preset threshold range are filtered out, and the filtered object sample set is then stored offline in a Hadoop Distributed File System (HDFS) so that it can be conveniently acquired by subsequent steps, as sketched below.
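As a hedged illustration of the filtering in step A1, the sketch below drops initial object samples that fall outside a preset threshold range; the column name, range values, and storage path are hypothetical and stand in for values derived from business experience.

import pandas as pd

# Hypothetical preset threshold range, e.g. plausible daily active duration in minutes.
PRESET_RANGE = {"daily_active_minutes": (1, 960)}

def filter_initial_samples(samples: pd.DataFrame) -> pd.DataFrame:
    """Keep only object samples whose fields fall inside the preset threshold range."""
    mask = pd.Series(True, index=samples.index)
    for column, (low, high) in PRESET_RANGE.items():
        mask &= samples[column].between(low, high)
    return samples[mask]

# The filtered sample set would then be written offline, e.g. to HDFS,
# for the feature construction in step A2 (the path would be deployment-specific):
# filter_initial_samples(raw_samples).to_parquet("hdfs://namenode/object_samples.parquet")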
In step A2, the filtered object sample set stored in step A1 is obtained from the HDFS, and an object sample feature set is constructed based on the filtered object sample set, where an object sample feature includes at least one of an object basic attribute information feature, a device basic attribute information feature, and a network connection attribute information feature. A service sample set for the target service is also acquired, and characterization processing is performed on the service sample set to obtain a service sample feature set. The object sample feature set and the service sample feature set are then aggregated based on a plurality of preset time periods to obtain feature sets for different preset time periods; for example, the object sample feature set and the service sample feature set may be aggregated over the last half year, the last 3 months, the last 1 month, and the last 1 week. The aggregation processing methods in the embodiments of the present application include, but are not limited to, summation, median, and standard deviation.
On this basis, feature processing is performed on the feature sets of the different preset time periods to obtain a feature-processed feature set, where the feature processing in this embodiment includes, but is not limited to, normalization feature processing and discretization feature processing. Dimension reduction processing is then performed on the feature-processed feature set to obtain first object behavior features, sorting processing is performed on the feature-processed feature set to obtain second object behavior features, and aggregation processing is performed on the first object behavior features and the second object behavior features to obtain an aggregated feature set. Finally, a preset policy is determined based on the service samples, and the aggregated feature set is screened and processed based on the preset policy to determine a first interestingness feature set and a second interestingness feature set. The first interestingness feature set includes object sample features whose feature point scores are smaller than a first threshold, and the second interestingness feature set includes object sample features whose feature point scores are greater than the first threshold, where the feature point score indicates the importance degree of a feature. A minimal sketch of this feature construction is given below.
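The following sketch assumes per-object behavior events stored in a pandas dataframe with hypothetical "object_id", "timestamp", and "value" columns; the window lengths and the sum/median/standard-deviation aggregations follow the text, while the min-max normalization and equal-width discretization are illustrative choices, not the only possibilities.

import pandas as pd

TIME_WINDOWS_DAYS = {"last_week": 7, "last_month": 30, "last_3_months": 90, "last_half_year": 180}

def aggregate_by_windows(events: pd.DataFrame, now: pd.Timestamp) -> pd.DataFrame:
    """Aggregate per-object behavior over several preset time windows (the fifth feature set)."""
    parts = []
    for name, days in TIME_WINDOWS_DAYS.items():
        window = events[events["timestamp"] >= now - pd.Timedelta(days=days)]
        agg = window.groupby("object_id")["value"].agg(["sum", "median", "std"])
        agg.columns = [f"{name}_{col}" for col in agg.columns]
        parts.append(agg)
    return pd.concat(parts, axis=1).fillna(0.0)

def normalize_and_discretize(features: pd.DataFrame, bins: int = 10) -> pd.DataFrame:
    """Min-max normalization followed by equal-width discretization (the sixth feature)."""
    normalized = (features - features.min()) / (features.max() - features.min() + 1e-9)
    return normalized.apply(lambda col: pd.cut(col, bins=bins, labels=False))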
In step A3, after the feature processing in step A2 is completed, the feature-processed sample set is divided into a training set and a test set. Specifically, the samples are divided according to the time window to which they belong, with samples from the earlier time windows used as the training set and samples from the later time windows used as the test set. Then, multiple types of models are trained in parallel with default parameters, the model with the best effect is selected from the multiple types of models according to the area under the ROC curve (AUC), a model evaluation index, and the model with the best effect is determined as the classification model to be trained. In the embodiments of the present application, the multiple types of models include, but are not limited to, Support Vector Machines (SVMs), Convolutional Neural Networks (CNNs), real-time attention based look-alike models (RALMs), deep & cross networks (DCNs), and the like. It should be understood that after the classification model to be trained is determined, it needs to be verified on the verification set to test the stability of its effect; in this embodiment, the deep & cross network is determined as the classification model to be trained. A hedged sketch of this selection step follows.
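The following sketch of the model selection in step A3 uses scikit-learn estimators as stand-ins for the model families named in the text (an actual deep & cross network or RALM would be built separately with a deep-learning framework); the candidate list and default parameters are illustrative assumptions.

from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

def select_model_to_train(X_train, y_train, X_test, y_test):
    """Train several candidate classifiers with default parameters and keep the one with the best AUC."""
    candidates = {
        "svm": SVC(probability=True),
        "gbdt": GradientBoostingClassifier(),
        "mlp": MLPClassifier(max_iter=500),
    }
    auc_scores = {}
    for name, model in candidates.items():
        model.fit(X_train, y_train)
        auc_scores[name] = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    best_name = max(auc_scores, key=auc_scores.get)
    return best_name, candidates[best_name], auc_scores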
In step A4, noise addition processing is performed on the first interestingness feature set obtained in step A2 to obtain a first noise-added feature set, which completes the first layer of differential protection. A third interestingness feature set and a fourth interestingness feature set are then determined based on the second interestingness feature set and the first noise-added feature set, where the correlation degree between the object sample features in the third interestingness feature set is smaller than a second threshold and the correlation degree between the object sample features in the fourth interestingness feature set is greater than the second threshold; noise addition processing is then performed on the third interestingness feature set to obtain a second noise-added feature set, which completes the second layer of differential protection. Based on the fourth interestingness feature set and the second noise-added feature set, the interest point classification results corresponding to the object samples are obtained through the classification model to be trained, and the model parameters of the classification model to be trained are updated according to a target loss function based on the interest point classification results corresponding to the object samples and the interestingness labels corresponding to the object samples to obtain the target classification model, where the target loss function is obtained by performing noise addition on the loss function so as to complete the third layer of differential protection. On this basis, the model training process can be solidified, and offline training, verification, alarming, and solidification can be performed at regular intervals. A sketch of the two feature-level protection layers follows.
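The sketch below illustrates the first two layers of differential protection in step A4 on a NumPy feature matrix: low-importance columns are perturbed first, and then, among the resulting columns, those with weak average correlation to the other columns are perturbed again. The Laplace noise, the 0.5 importance threshold, and the correlation threshold are assumptions; noise on the loss function (the third layer) would be applied inside the training loop and is omitted here.

import numpy as np

def add_laplace(block, scale, rng):
    """Perturb a block of feature columns with Laplace noise."""
    return block + rng.laplace(0.0, scale, size=block.shape)

def feature_level_protection(features, importance_scores, score_threshold=0.5,
                             corr_threshold=0.3, noise_scale=1.0, seed=0):
    """Apply the first two layers of differential protection to a (samples x columns) matrix."""
    rng = np.random.default_rng(seed)
    low = features[:, importance_scores < score_threshold]    # first interestingness feature set
    high = features[:, importance_scores >= score_threshold]  # second interestingness feature set
    noisy_low = add_laplace(low, noise_scale, rng)             # first noise-added feature set (layer 1)

    combined = np.hstack([high, noisy_low])
    corr = np.abs(np.corrcoef(combined, rowvar=False))
    mean_corr = (corr.sum(axis=0) - 1.0) / max(corr.shape[0] - 1, 1)  # average relevance to other columns
    weak = combined[:, mean_corr < corr_threshold]             # third interestingness feature set
    strong = combined[:, mean_corr >= corr_threshold]          # fourth interestingness feature set
    noisy_weak = add_laplace(weak, noise_scale, rng)           # second noise-added feature set (layer 2)

    # The fourth set together with the second noise-added set is what the classifier is trained on.
    return np.hstack([strong, noisy_weak])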
In step A5, for each service request of the target service, a target object for the target service can be determined, thereby obtaining a service sample set for the target service. Feature calculation is then completed based on an online computing engine to obtain the fourth interestingness feature and the second noise-added feature. Specifically, characterization processing is performed on the target object to obtain the target object feature; then, based on the target object feature and the service sample feature set, the first interestingness feature and the second interestingness feature are determined after feature processing, noise addition processing is performed on the first interestingness feature to obtain the first noise-added feature, the third interestingness feature and the fourth interestingness feature are determined after feature processing based on the second interestingness feature and the first noise-added feature, noise addition processing is then performed on the third interestingness feature to obtain the second noise-added feature, the interest point classification result corresponding to the target object is obtained through the target classification model based on the fourth interestingness feature and the second noise-added feature, and finally the interestingness label of the target object is determined according to the interest point classification result corresponding to the target object.
With reference to the above description, the scheme provided in the embodiments of the present application relates to machine learning techniques of artificial intelligence. The target object identification method of the present application is described below. Referring to fig. 3, fig. 3 is a schematic diagram of an embodiment of the target object identification method in the embodiments of the present application; as shown in fig. 3, an embodiment of the target object identification method includes:
101. Acquiring a target object for the target service.
In this embodiment, the object recognition apparatus pulls the latest target object for the target service from an online storage engine based on a preset period; at this time, each service request of the target object pulls real-time behavior data of the target object online, so the target object for the target service specifically includes the real-time behavior data of the target object.
Specifically, the target object includes at least one of target object basic attribute information, target device basic attribute information, and target network connection attribute information. The target object basic attribute information includes, but is not limited to, the gender of the target object, the native place of the target object, the city of residence of the target object, and the like. The target device basic attribute information includes, but is not limited to, the system version of the target object's terminal device, the resolution of the terminal device, the application programming interface level (API_Level) of the terminal device, the number of Central Processing Unit (CPU) cores of the terminal device, and the like. The target network connection attribute information includes, but is not limited to, the number of wireless fidelity (Wi-Fi) connections of the target object's terminal device, the earliest time each day at which the terminal device connects to Wi-Fi, and the like.
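To make the three categories of attribute information concrete, the following is a hypothetical container for one target object record; the field names are illustrative, taken only from the examples above, and are not fixed by the method.

from dataclasses import dataclass
from typing import Optional

@dataclass
class TargetObjectRecord:
    """Hypothetical record combining the three kinds of attribute information."""
    # target object basic attribute information
    gender: Optional[str] = None
    native_place: Optional[str] = None
    city_of_residence: Optional[str] = None
    # target device basic attribute information
    system_version: Optional[str] = None
    screen_resolution: Optional[str] = None
    api_level: Optional[int] = None
    cpu_cores: Optional[int] = None
    # target network connection attribute information
    wifi_connection_count: Optional[int] = None
    earliest_daily_wifi_connect_time: Optional[str] = None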
102. Performing characterization processing on the target object to obtain the target object feature.
In this embodiment, the object recognition apparatus performs characterization processing on the target object obtained in step 101 to obtain the target object feature, where the target object feature has a corresponding relationship with the target object; that is, the target object feature includes at least one of an object basic attribute information feature, a device basic attribute information feature, and a network connection attribute information feature.
103. Determining, based on the target object feature, a first interestingness feature and a second interestingness feature after feature processing.
In this embodiment, the object recognition apparatus determines the first interestingness feature and the second interestingness feature through multiple steps of feature processing based on the target object feature obtained in step 102. It should be understood that at this point the first interestingness feature and the second interestingness feature are each a feature vector or a feature matrix, but the feature point score of the first interestingness feature is smaller than the first threshold, while the feature point score of the second interestingness feature is greater than the first threshold.
Specifically, the feature point score indicates the importance degree of a feature and is obtained through the DeepLIFT (Deep Learning Important FeaTures) algorithm, a feature scoring algorithm based on back propagation. Through the DeepLIFT scoring algorithm, the important features of the target object that have a large influence on the subsequent interest point classification result can be identified; the importance of a feature is determined by its feature point score, where a higher feature point score indicates greater importance and a lower feature point score indicates lower importance. Therefore, a target object feature whose feature point score is smaller than the first threshold is determined as the first interestingness feature, meaning that the target object features included in the first interestingness feature have a low importance degree, and a target object feature whose feature point score is greater than the first threshold is determined as the second interestingness feature, meaning that the target object features included in the second interestingness feature have a high importance degree.
For example, the first threshold value is 60, and when the feature point score of the target object feature 1 is 50, the feature point score of the target object feature 2 is 80, the feature point score of the target object feature 3 is 30, and the feature point score of the target object feature 4 is 75, the target object feature 1 and the target object feature 3 may be determined as the first interestingness feature and the target object feature 2 and the target object feature 4 may be determined as the second interestingness feature based on the first threshold value (60), thereby showing that the importance of the target object feature 1 and the target object feature 3 is low and the importance of the target object feature 2 and the target object feature 4 is high. It should be understood that the value of the first threshold is determined by experiment and/or statistics based on a large amount of data, and the value of the first threshold is not specifically limited herein.
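A minimal sketch of the split described above, assuming the feature point scores have already been computed (for example with a DeepLIFT-style attribution) and stored alongside each target object feature; the names, values and threshold are illustrative:

    FIRST_THRESHOLD = 60  # illustrative value; in practice determined by experiment and/or statistics

    # feature name -> (feature value, feature point score); values are hypothetical
    scored_features = {
        "feature_1": (0.7, 50),
        "feature_2": (1.3, 80),
        "feature_3": (0.2, 30),
        "feature_4": (2.1, 75),
    }

    # Features scoring below the threshold form the first interestingness feature (low importance);
    # features scoring above it form the second interestingness feature (high importance).
    first_interestingness = {k: v for k, (v, s) in scored_features.items() if s < FIRST_THRESHOLD}
    second_interestingness = {k: v for k, (v, s) in scored_features.items() if s > FIRST_THRESHOLD}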
104. And carrying out noise adding processing on the first interestingness characteristic to obtain a first noise adding characteristic.
In this embodiment, the object identification apparatus performs noise addition processing on the first interestingness feature obtained in step 103 to obtain a first noise-added feature. Since the higher the feature point score, the greater the importance, and the lower the feature point score, the lower the importance, it can be seen from step 103 that the first interestingness feature contains the target object features whose feature point scores are smaller than the first threshold, that is, features of low importance. More noise is therefore added to the first interestingness feature, whose features are of low importance, thereby preserving the integrity of the target object features of high importance. In practical application, the feature point score is obtained based on the DeepLIFT algorithm, and performing the noise addition based on the feature point score allows the sample prior knowledge to be fully considered, so that the important features are retained.
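The embodiment does not fix a particular noise distribution; the following sketch assumes Laplace noise (commonly used for privacy-preserving perturbation) is added only to the low-importance first interestingness feature, while the high-importance features are left untouched:

    import numpy as np

    def add_noise(features: np.ndarray, scale: float = 1.0, seed: int = 0) -> np.ndarray:
        """Return a noisy copy of the low-importance features (Laplace noise is an assumption)."""
        rng = np.random.default_rng(seed)
        return features + rng.laplace(loc=0.0, scale=scale, size=features.shape)

    first_interestingness = np.array([0.7, 0.2])   # low-importance features (illustrative values)
    first_noisy = add_noise(first_interestingness)  # first noise-added feature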
105. And obtaining an interest point classification result corresponding to the target object through the target classification model based on the second interest degree characteristic and the first noise adding characteristic.
In this embodiment, the object recognition apparatus inputs the second interestingness feature, which is not subjected to the noise adding process in step 103, and the first noise-added feature obtained after the noise adding process in step 104 into the target classification model, and the target classification model outputs the interest point classification result corresponding to the target object. In particular, the interest point classification result indicates the predicted probability that the target object has a high willingness to pay. This embodiment takes a deep & cross network (DCN) as an example of the target classification model; in practical applications, the target classification model may also be a support vector machine (SVM), a convolutional neural network (CNN), a real-time attention based look-alike model (RALM), or another deep learning model, and the target classification model is not limited herein.
106. And determining the interest degree label of the target object according to the interest point classification result corresponding to the target object.
In this embodiment, since the interest point classification result indicates the predicted probability that the target object has a high willingness to pay, the object identification apparatus determines the interestingness label of the target object according to the interest point classification result obtained in step 105. That is, an interest point classification result larger than a preset threshold is determined as having a high willingness to pay, and the predicted interestingness label corresponding to the target object is "1"; conversely, an interest point classification result smaller than the preset threshold is determined as not having a high willingness to pay, and the predicted interestingness label is "0". This embodiment takes a preset threshold of 60% as an example; in practical application, the value of the preset threshold is determined through experiments and/or statistics based on a large amount of data, and the preset threshold is not specifically limited here.
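A minimal sketch of steps 105 and 106, with the target classification model replaced by a toy placeholder function; the 60% threshold follows the example in the text, while the model stand-in and feature values are assumptions:

    import numpy as np

    def target_classification_model(features: np.ndarray) -> float:
        """Placeholder for the trained model (e.g., a deep & cross network); returns a probability."""
        return float(1.0 / (1.0 + np.exp(-features.sum())))  # toy stand-in, not the real model

    PRESET_THRESHOLD = 0.60  # illustrative; determined by experiment/statistics in practice

    second_interestingness_vec = np.array([1.3, 2.1])  # high-importance features (illustrative)
    first_noisy_vec = np.array([0.9, 0.1])             # noise-added low-importance features (illustrative)

    model_input = np.concatenate([second_interestingness_vec, first_noisy_vec])
    poi_probability = target_classification_model(model_input)
    interestingness_label = 1 if poi_probability > PRESET_THRESHOLD else 0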
In the object identification process, the features with small feature point scores are scrambled, but the features with large feature point scores are not scrambled, and the greater the feature point scores are, the more important the features are indicated, so that the accuracy of the classification result of the interest points output by the model is ensured on the premise of retaining the important features, and the accuracy of the target object identification is improved. Secondly, the target object comprises at least one of object basic attribute information, equipment basic attribute information and network connection attribute information, so that the protection of any attribute information can be promoted by scrambling the characteristics, the possibility of object privacy disclosure is reduced, and the privacy of the target object is protected.
Optionally, on the basis of the embodiment corresponding to fig. 3, in an optional embodiment of the target object identification method provided in the embodiment of the present application, the target object identification method further includes:
acquiring a service sample set aiming at a target service;
performing characterization processing on the service sample set to obtain a service sample characteristic set;
based on the target object feature, determining a first interestingness feature and a second interestingness feature after feature processing, specifically comprising:
and determining a first interestingness characteristic and a second interestingness characteristic after characteristic processing is carried out on the basis of the target object characteristic and the service sample characteristic set.
In this embodiment, the object identification apparatus may further obtain a service sample set for the target service, where the service sample set may include an advertisement corresponding to the target service, a commodity corresponding to the target service, and the like. Based on this, the service sample set is characterized to obtain a service sample feature set, where the service sample feature set may include the click rate of the target object on the advertisement corresponding to the target service or the conversion rate of the target object for that advertisement. The service sample set and the service sample feature set need to be determined according to the service characteristics and requirements of the target service, which is not limited herein.
Further, to prevent the vector representation of the target object features and the vector representation of the service sample feature set from being unrelated, which would lead to incomplete feature extraction in the subsequent model processing, feature crossing is required. That is, the object recognition device performs cross processing and subsequent feature processing on the target object features and the service sample feature set to determine the first interestingness feature and the second interestingness feature. It should be understood that, since feature crossing reflects the correlation between features, once the service sample feature set and the target object feature are crossed, the correlation between features needs to be considered in the subsequent feature processing, and the first interestingness feature and the second interestingness feature are feature matrices, not feature vectors.
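A minimal sketch of the feature crossing step, assuming the cross is realized as an outer product between the target object feature vector and the business sample feature vector (the specific crossing operation is not fixed by the embodiment), which yields the feature matrix mentioned above:

    import numpy as np

    object_features = np.array([0.7, 1.3, 0.2])   # target object features (illustrative)
    business_features = np.array([0.05, 0.12])     # e.g. click rate, conversion rate (illustrative)

    # Outer product: every pairwise combination becomes one crossed feature,
    # so the result is a feature matrix rather than a feature vector.
    crossed = np.outer(object_features, business_features)
    print(crossed.shape)  # (3, 2)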
Optionally, on the basis of the embodiment corresponding to fig. 3, in an optional embodiment of the target object identification method provided in the embodiment of the present application, based on the second interestingness feature and the first noise-added feature, the obtaining, by the target classification model, an interest point classification result corresponding to the target object specifically includes:
performing feature processing based on the second interestingness feature and the first noise adding feature, and determining a third interestingness feature and a fourth interestingness feature, wherein the relevance between the third interestingness feature and another feature is smaller than a second threshold, and the relevance between the fourth interestingness feature and another feature is larger than the second threshold;
carrying out noise adding processing on the third interestingness characteristic to obtain a second noise adding characteristic;
and obtaining an interest point classification result corresponding to the target object through the target classification model based on the fourth interestingness characteristic and the second noise adding characteristic.
In this embodiment, the object identification device specifically performs feature processing based on the second interestingness feature and the first noisy feature, and then determines the third interestingness feature and the fourth interestingness feature, where a relevance between the third interestingness feature and another feature is smaller than a second threshold, which indicates that the relevance between the third interestingness feature and other object sample features is smaller. On the contrary, the relevance between the fourth interestingness feature and another feature is larger than the second threshold, which indicates that the relevance between the fourth interestingness feature and other object sample features is larger, and the fourth interestingness feature can reflect more relevant feature information. It should be understood that the value of the second threshold is determined by experiment and/or statistics based on a large amount of data, and the value of the second threshold is not specifically limited herein.
Therefore, the object recognition device selects to add more noise to the third interestingness feature with smaller relevance between features, and obtains a second noisy feature. And then the object recognition device inputs the fourth interestingness feature which is not subjected to noise adding processing and the second noise adding feature which is obtained after the noise adding processing into the target classification model, and outputs the interest point classification result corresponding to the target object through the target classification model. Specifically, similar to step 105, the interest point classification result indicates a prediction probability that the object sample has a high willingness to pay, and is not described herein again. It should be understood that since the first interestingness feature and the second interestingness feature are further obtained based on the object sample feature and the business sample feature, and the first interestingness feature and the second interestingness feature are feature matrices, the fourth interestingness feature and the second noisy feature are also feature matrices.
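A minimal sketch of this second, correlation-based split and noise addition, assuming Pearson correlation between feature columns is used as the relevance measure (the embodiment does not mandate a specific measure) and reusing a Laplace noise helper; the threshold and data are illustrative:

    import numpy as np

    SECOND_THRESHOLD = 0.5  # illustrative; determined by experiment/statistics in practice

    def split_by_correlation(feature_matrix: np.ndarray):
        """Split feature columns by their maximum absolute correlation with the other columns."""
        corr = np.corrcoef(feature_matrix, rowvar=False)
        np.fill_diagonal(corr, 0.0)
        max_corr = np.abs(corr).max(axis=0)
        low_idx = np.where(max_corr < SECOND_THRESHOLD)[0]    # third interestingness features
        high_idx = np.where(max_corr >= SECOND_THRESHOLD)[0]  # fourth interestingness features
        return low_idx, high_idx

    def add_noise(columns: np.ndarray, scale: float = 1.0, seed: int = 0) -> np.ndarray:
        rng = np.random.default_rng(seed)
        return columns + rng.laplace(0.0, scale, size=columns.shape)

    features = np.random.default_rng(1).normal(size=(100, 6))  # rows: samples, cols: features (toy data)
    low_idx, high_idx = split_by_correlation(features)
    features[:, low_idx] = add_noise(features[:, low_idx])     # second noise-added features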
In order to further understand the present solution, please refer to fig. 4, which is a schematic view of another embodiment of the target object identification method in the embodiment of the present application. As shown in fig. 4, a target object F1 for a target service is characterized to obtain a target object feature F2, and a service sample set F3 is characterized to obtain a service sample feature set F4. Then, based on the target object feature F2 and the service sample feature set F4, a first interestingness feature and a second interestingness feature F5 are determined, and noise is added to the first interestingness feature in F5 to obtain a first noise-added feature, so that a first noise-added feature and a second interestingness feature F6 are obtained.
Further, based on the first noise-added feature and the second interestingness feature F6, a third interestingness feature and a fourth interestingness feature F7 are determined, and noise is added to the third interestingness feature in F7 to obtain a second noise-added feature, so that a second noise-added feature and a fourth interestingness feature F8 are obtained. The second noise-added feature and the fourth interestingness feature F8 are used as inputs of the target classification model F9, the target classification model F9 outputs the interest point classification result corresponding to the target object, and then, based on the method introduced in step 106, the interestingness label F10 of the target object is determined according to the interest point classification result. It should be understood that fig. 4 is only intended to further illustrate how the features are scrambled during the object identification process, and should not be understood as a specific limitation of the present solution.
In the embodiment of the application, another target object identification method is provided. First, the target object features and the service sample feature set are crossed, so that the obtained feature set can include more correlation information among features, which further ensures the accuracy of the output result of the target classification model. Secondly, in the process of determining the interestingness label, the features with small feature point scores are scrambled while the features with large feature point scores are not, and since a larger feature point score indicates a more important feature, the accuracy of the model output probability is ensured on the premise of keeping the important features. In addition, the features with small relevance among features are scrambled while the features with large relevance are not, so that the correlated feature information among the object sample features is retained, which further ensures the accuracy of the output result of the target classification model and improves the accuracy of target object identification.
Optionally, on the basis of the embodiment corresponding to fig. 3, in an optional embodiment of the target object identification method provided in the embodiment of the present application, the target classification model is obtained by updating model parameters of the classification model to be trained according to a target loss function based on the interest point classification results corresponding to the N object samples and the interest degree labels corresponding to the N object samples, where N is an integer greater than 1;
the target loss function is obtained after noise processing.
In this embodiment, the object identification module may further update the model parameters of the classification model to be trained according to a target loss function based on the interest point classification results corresponding to the N object samples and the interestingness labels corresponding to the N object samples, so as to obtain the target classification model, where the target loss function is obtained after noise processing and N is an integer greater than 1. Specifically, a third layer of differential privacy protection is performed in the model training process: noise is added to the target loss function rather than to the prediction result, and the deviation from the optimal or suboptimal solution introduced by the noise is then corrected as much as possible by the parameter adaptation of the classification model to be trained during forward and back propagation.
Specifically, the object recognition device performs iterative training by using the interest point classification results corresponding to the N object samples as targets, that is, determining a loss value of a target loss function according to the difference between the interest point classification results corresponding to the N object samples and the interest degree labels corresponding to the N object samples, determining whether the loss function reaches a convergence condition according to the loss value of the target loss function, and if the loss value does not reach the convergence condition, updating model parameters of the classification model to be trained by using the loss value of the target loss function.
Next, the convergence condition of the target loss function may be that the value of the loss function is smaller than or equal to a first loss-function threshold, where the first loss-function threshold may be, for example, 0.005, 0.01, 0.02 or another value close to 0; alternatively, the convergence condition may be defined with respect to a second loss-function threshold, whose value may likewise be 0.005, 0.01, 0.02 or another value close to 0, and other convergence conditions may also be adopted, which is not limited herein. It should be understood that, in practical applications, the target loss function may also be a mean square error loss function, a ranking loss function, a focal loss function, and the like, which is not limited herein.
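A minimal sketch of training with a noise-added loss, assuming PyTorch, binary cross-entropy as the target loss function, and Laplace noise added to the scalar loss value before back-propagation; all of these concrete choices are assumptions made for illustration:

    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))  # stand-in classifier
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    criterion = nn.BCEWithLogitsLoss()

    features = torch.randn(64, 8)                   # N object sample features (toy data)
    labels = torch.randint(0, 2, (64, 1)).float()   # interestingness labels

    for epoch in range(100):
        optimizer.zero_grad()
        loss = criterion(model(features), labels)
        # Noise is added to the target loss function rather than to the prediction result;
        # forward/backward propagation then adapts the parameters around the noisy objective.
        noisy_loss = loss + torch.distributions.Laplace(0.0, 0.01).sample()
        noisy_loss.backward()
        optimizer.step()
        if loss.item() <= 0.01:  # convergence condition: loss value below a preset threshold
            break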
In the embodiment of the application, a method for scrambling a target loss function is provided, wherein noise is added according to the target loss function instead of a prediction result, and then the deviation of the optimal solution or suboptimal solution caused by noise addition is corrected by utilizing parameter self-adaption of a classification model to be trained in forward and feedback propagation, so that the reliability of the obtained target classification model is improved, the accuracy of an output result of the target classification model is improved, and the accuracy of target object identification is further improved.
Optionally, on the basis of the embodiment corresponding to fig. 3, in an optional embodiment of the target object identification method provided in the embodiment of the present application, the target object identification method further includes:
acquiring an object sample set aiming at a target service, wherein the object sample set comprises N object samples, each object sample corresponds to an interestingness label, and each object sample comprises at least one of object basic attribute information, equipment basic attribute information and network connection attribute information;
performing characterization processing on an object sample set to obtain an object sample feature set, wherein the object sample feature set comprises N object sample features, and the object sample features and the object samples have a corresponding relation;
determining a first interestingness feature set and a second interestingness feature set based on the object sample feature set, wherein the first interestingness feature set comprises P object sample features with feature point scores smaller than a first threshold value, the second interestingness feature set comprises Q object sample features with feature point scores larger than the first threshold value, and P and Q are integers larger than or equal to 1;
carrying out noise adding processing on the first interestingness feature set to obtain a first noise adding feature set;
based on the second interest degree feature set and the first noise adding feature set, obtaining interest point classification results corresponding to the N object samples through a classification model to be trained;
and training the classification model to be trained according to the interest point classification results corresponding to the N object samples and the interest degree labels corresponding to the N object samples.
In this embodiment, as can be seen from the flowchart shown in fig. 2, the object recognition apparatus obtains a set of object samples for the target service in the storage space of the HDFS, where each object sample corresponds to an interestingness label, and the object recognition apparatus in this embodiment aims to determine whether the object has a high willingness to pay, so that the interestingness label indicates whether the object sample has a high willingness to pay, for example, an interestingness label of "1" indicates that the object sample has a high willingness to pay, and an interestingness label of "0" indicates that the object sample does not have a high willingness to pay. Specifically, each object sample includes at least one of object basis attribute information, device basis attribute information, and network connection attribute information. The information included in the object sample is similar to the information included in the target object, and is not described herein again.
Further, the object identification device performs characterization processing on the obtained object sample set to obtain an object sample feature set, where the object sample feature set includes N object sample features, and the object sample features and the object samples have a corresponding relationship, that is, the object sample features include at least one of an object basic attribute information feature, an equipment basic attribute information feature, and a network connection attribute information feature.
Then, the object identification device performs feature processing and the like based on the obtained object sample feature set to determine a first interestingness feature set and a second interestingness feature set. It should be understood that, at this time, the first interestingness feature set and the second interestingness feature set are each a feature vector or a feature matrix, but the first interestingness feature set includes P object sample features whose feature point scores are smaller than the first threshold, the second interestingness feature set includes Q object sample features whose feature point scores are larger than the first threshold, and P and Q are integers greater than or equal to 1. The feature point score indicates the importance of a feature, as described in detail in step 103, and is not repeated here. Therefore, the object sample features whose feature point scores are smaller than the first threshold are determined as the first interestingness feature set, that is, the object sample features in the first interestingness feature set are less important, and the object sample features whose feature point scores are larger than the first threshold are determined as the second interestingness feature set, that is, the object sample features in the second interestingness feature set are more important.
In order to protect the privacy information of the object, the differential privacy processing described in the foregoing embodiment needs to be performed, so the object identification apparatus performs noise processing on the first interestingness feature set to obtain the first noisy feature set. Since the higher the feature point score, the greater the importance, and the lower the feature point score, the lower the importance, the first interestingness feature set includes the object sample features whose feature point scores are smaller than the first threshold, that is, features of lower importance; more noise is therefore added to the first interestingness feature set, thereby preserving the integrity of the object sample features of higher importance. In practical application, the feature point score is obtained based on the DeepLIFT algorithm, and performing the noise addition based on the feature point score allows the sample prior knowledge to be fully considered, so that the important features in the feature set are retained.
Further, the object recognition device inputs the second interestingness feature set, which is not subjected to noise adding processing, and the first noisy feature set obtained after the noise adding processing into the classification model to be trained, and the classification model to be trained outputs the interest point classification results corresponding to the N object samples. Specifically, the interest point classification result indicates the prediction probability that the object sample has a high willingness to pay; an interest point classification result greater than the preset threshold is determined as having a high willingness to pay, that is, the predicted interestingness label corresponding to the object sample is "1", whereas an interest point classification result smaller than the preset threshold is determined as not having a high willingness to pay, that is, the predicted interestingness label is "0". This embodiment takes a preset threshold of 60% as an example; in practical application, the value of the preset threshold is determined through experiments and/or statistics based on a large amount of data, and the preset threshold is not specifically limited here.
Specifically, the classification model to be trained at this time may be any one of a support vector machine (SVM), a convolutional neural network (CNN), a real-time attention based look-alike model (RALM), and a deep & cross network (DCN), or another deep learning model.
And finally, the object recognition device trains the classification model to be trained according to the interest point classification results corresponding to the N object samples and the interest degree labels corresponding to the N object samples. Specifically, the object recognition device performs iterative training by using the interest point classification results corresponding to the N object samples as targets, that is, determining a loss value of a target loss function according to the difference between the interest point classification results corresponding to the N object samples and the interest degree labels corresponding to the N object samples, determining whether the loss function reaches a convergence condition according to the loss value of the target loss function, and if the loss value does not reach the convergence condition, updating model parameters of the classification model to be trained by using the loss value of the target loss function.
To further understand the present solution, please refer to fig. 5, which is a schematic diagram of an embodiment of training a classification model to be trained in the present embodiment. As shown in fig. 5, B1 refers to the object sample set for the target service (including the interestingness labels corresponding to the N object samples), B2 refers to the object sample feature set, B3 refers to the first interestingness feature set, B4 refers to the second interestingness feature set, B5 refers to the first noisy feature set obtained by performing noise processing on the first interestingness feature set B3 in the manner described in the foregoing embodiment, B6 refers to the classification model to be trained, and B7 refers to the interest point classification results corresponding to the N object samples output by the classification model to be trained B6 based on the second interestingness feature set B4 and the first noisy feature set B5. Based on this, the classification model to be trained is iteratively trained with the object sample set B1 (including the interestingness labels corresponding to the N object samples), the interest point classification results B7 corresponding to the N object samples, and the target loss function. It should be understood that the example in fig. 5 is only for convenience of understanding the present solution and is not used to limit the present solution.
In the embodiment of the application, a method for training a target classification model is provided, by the method, in model training, features with small feature point scores are scrambled, but features with large feature point scores are not scrambled, and the larger the feature point score is, the more important the feature is indicated, so that the accuracy of model output probability is ensured on the premise of keeping important features, and secondly, each object sample comprises at least one of object basic attribute information, equipment basic attribute information and network connection attribute information, so that the protection of any attribute information can be promoted by scrambling the features, the possibility of object privacy disclosure is reduced, and the object privacy is protected.
Optionally, on the basis of the embodiment corresponding to fig. 3, in an optional embodiment of the target object identification method provided in the embodiment of the present application, the target object identification method further includes:
acquiring a service sample set aiming at a target service;
performing characterization processing on the service sample set to obtain a service sample characteristic set;
determining a first interestingness feature set and a second interestingness feature set based on the object sample feature set, comprising:
and determining a first interestingness characteristic set and a second interestingness characteristic set based on the object sample characteristic set and the business sample characteristic set.
In this embodiment, the object identification apparatus may further obtain a service sample set for the target service, where the service sample set may include an advertisement corresponding to the target service, a commodity corresponding to the target service, and the like. Based on this, the service sample set is characterized to obtain a service sample feature set, where the service sample feature set may include the click rate of the object samples on the advertisement corresponding to the target service or the conversion rate of the object samples for that advertisement. The service sample set and the service sample feature set need to be determined according to the service characteristics and requirements of the target service, which is not limited herein.
Further, in order to avoid the vector representation of the object sample feature set and the vector representation of the business sample feature set being unrelated, feature crossing is performed. That is, the object recognition device performs feature cross processing and subsequent feature processing on the object sample feature set and the business sample feature set to determine the first interestingness feature set and the second interestingness feature set. The first interestingness feature set obtained at this time includes the P object sample features and business sample features whose feature point scores are smaller than the first threshold, and the second interestingness feature set includes the Q object sample features and business sample features whose feature point scores are larger than the first threshold. It should be understood that, since feature crossing reflects the correlation between features, once the business sample feature set and the object sample feature set are crossed, the correlation between features needs to be considered in the subsequent feature processing, and the first interestingness feature set and the second interestingness feature set are feature matrices, not feature vectors.
Optionally, on the basis of the embodiment corresponding to fig. 3, in an optional embodiment of the target object identification method provided in the embodiment of the present application, based on the second interestingness feature set and the first noisy feature set, the obtaining, by the to-be-trained classification model, the interest point classification results corresponding to the N object samples specifically includes:
determining a third interestingness feature set and a fourth interestingness feature set based on the second interestingness feature set and the first noisy feature set, wherein the relevance between object sample features in the third interestingness feature set is smaller than a second threshold, and the relevance between object sample features in the fourth interestingness feature set is larger than the second threshold;
carrying out noise adding processing on the third interestingness feature set to obtain a second noise adding feature set;
and obtaining interest point classification results corresponding to the N object samples through the classification model to be trained based on the fourth interestingness feature set and the second noise adding feature set.
In this embodiment, the object identification apparatus needs to further consider the correlation between the second interestingness feature set and the object sample features in the first noisy feature set, so that the object sample features are adaptively added with noise for the second time. The object identification device specifically determines a third interestingness feature set and a fourth interestingness feature set based on the second interestingness feature set and the first noisy feature set, wherein the relevance between object sample features in the third interestingness feature set is smaller than a second threshold, and the relevance between object sample features in the fourth interestingness feature set is larger than the second threshold.
Specifically, the relevance between the object sample features in the third interestingness feature set is smaller than the second threshold, which indicates that the relevance between the object sample features in the third interestingness feature set and other object sample features is smaller. On the contrary, the relevance between the object sample features in the fourth interestingness feature set is greater than the second threshold, which indicates that the relevance between the object sample features in the fourth interestingness feature set and other object sample features is greater, and the object sample features in the fourth interestingness feature set can reflect more relevant feature information. It should be understood that the value of the second threshold is determined by experiment and/or statistics based on a large amount of data, and the value of the second threshold is not specifically limited herein.
Based on the above, the object identification device selects to add more noise to the features with smaller relevance between the features, that is, selects to add more noise to the third interestingness feature set with lower relevance between the features of the object sample, so as to obtain the second noisy feature set. But the fourth interestingness feature set with high relevance between the features of the object sample is not subjected to noise adding, so that relevance feature information between the features of the object sample is reserved.
Then the object recognition device inputs the fourth interestingness feature set, which is not subjected to noise adding processing, and the second noisy feature set obtained after the noise adding processing into the classification model to be trained, and the classification model to be trained outputs the interest point classification results corresponding to the N object samples. Specifically, the interest point classification result indicates the prediction probability that the object sample has a high willingness to pay, which is not described here again. It should be understood that, since the first interestingness feature set and the second interestingness feature set are further obtained based on the object sample feature set and the service sample feature set and are feature matrices, the fourth interestingness feature set and the second noisy feature set are also feature matrices; in this embodiment, a deep & cross network (DCN) is determined as the classification model to be trained.
As can be seen from fig. 2, the object identification process includes a feature processing step; based on the foregoing embodiment, the scrambling performed during feature processing will be described in detail through fig. 6. Referring to fig. 6, fig. 6 is a schematic diagram of an embodiment of feature processing in the embodiment of the present application. As shown in fig. 6, characterization processing is performed on an object sample set G1 for a target service to obtain an object sample feature set G2, and characterization processing is performed on a service sample set G3 to obtain a service sample feature set G4. Then, based on the object sample feature set G2 and the service sample feature set G4, a first interestingness feature set and a second interestingness feature set G5 are determined, and noise processing is performed on the first interestingness feature set in G5 to obtain a first noisy feature set, so that a first noisy feature set and a second interestingness feature set G6 are obtained.
Further, a third interestingness feature set and a fourth interestingness feature set G7 are determined based on the first noisy feature set and the second interestingness feature set G6, and noise processing is performed on the third interestingness feature set in G7 to obtain a second noisy feature set, so that a second noisy feature set and a fourth interestingness feature set G8 are obtained. The second noisy feature set and the fourth interestingness feature set G8 are used as inputs of the classification model to be trained G9, and the classification model to be trained G9 outputs the interest point classification results corresponding to the N object samples. It should be understood that fig. 6 is only intended to further illustrate how the feature set is scrambled during model training, and should not be understood as a specific limitation of the present scheme.
In the embodiment of the application, a method for processing input features before model training is provided. By crossing the object sample feature set and the business sample feature set, the obtained feature set can include more correlation information among features, thereby ensuring the accuracy of the model output result. Secondly, the features with small relevance among features are scrambled while the features with large relevance are not, so that the correlated feature information among the object sample features is retained, which further ensures the accuracy of the model output result.
Optionally, on the basis of the embodiment corresponding to fig. 3, in an optional embodiment of the target object identification method provided in the embodiment of the present application, the obtaining an object sample set for a target service specifically includes:
acquiring an initial object sample set aiming at a target service;
determining a preset threshold range based on the target service;
based on a preset threshold range, determining N object samples from an initial object sample set of the target service.
In this embodiment, the object recognition device specifically obtains an initial object sample set for the target service, where the initial object sample set is obtained based on manual screening, and object samples carrying interestingness labels are then obtained based on preset rules, where the interestingness label indicates whether the object sample has a high willingness to pay; for example, an interestingness label of "1" indicates that the object sample has a high willingness to pay, and an interestingness label of "0" indicates that it does not. In a scenario of high willingness to pay for education and training, the preset rule may be that more than 3 education applications are installed on the terminal device of the object sample, or that the object sample has consumed more than 2 services corresponding to education and training. In a scenario of high willingness to pay for games, the preset rule may be that more than 5 game applications are installed on the terminal device of the object sample, or that the object sample has consumed more than 3 services in games, and the like, which is not limited herein.
Based on this, the object recognition device obtains a basic representation of the initial object samples, where the basic representation includes non-private behavior data of the object, such as whether software related to the target service is installed and whether the target service is used. In an actual application scenario, fake objects or cases in which a computer controls a mobile phone may exist, so in order to eliminate the influence of unreal objects on the modeling analysis, the object recognition device further needs to set a preset threshold range based on the service experience of the target service; for example, the preset threshold range concerns the traffic usage of the object sample when using the software product corresponding to the target service, or the time distribution of the traffic generated by the object sample when using that software product.
Further, the object recognition device determines N object samples from the initial object sample set of the target service based on the preset threshold range. Specifically, the Pauta criterion (the 3σ criterion) is used as the abnormal-value judgment criterion: assuming that a group of initial object samples contains only random errors, the standard deviation of the initial object sample set is calculated, and, based on the determined preset threshold range, errors exceeding the preset threshold range are regarded not as random errors but as gross errors, and the initial object samples containing gross errors are removed. The object recognition device filters out the initial object samples that do not fall within the preset threshold range, then generates an object sample set from the N object samples obtained after filtering, and stores the object sample set in the HDFS, from which it can be read directly whenever the object sample set is needed.
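A minimal sketch of this 3σ-style screening, applied to one numeric field of the initial object samples (for example the daily traffic usage of the software product corresponding to the target service); the field name and the toy data are illustrative:

    import numpy as np

    rng = np.random.default_rng(0)
    daily_traffic_mb = np.concatenate([rng.normal(110, 15, size=200),   # genuine objects (toy data)
                                       np.array([5000.0, 7500.0])])      # simulated / fake objects

    mean, std = daily_traffic_mb.mean(), daily_traffic_mb.std()

    # Pauta (3 sigma) criterion: deviations beyond 3*std are treated as gross errors rather than
    # random errors, and the corresponding initial object samples are removed from the sample set.
    mask = np.abs(daily_traffic_mb - mean) <= 3 * std
    filtered_samples = daily_traffic_mb[mask]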
In the embodiment of the application, a method for screening object samples in object identification is provided. Abnormal object samples are screened out of the initial object sample set and filtered away, that is, the abnormal object samples are not used for training the model, which avoids the influence of abnormal object samples on subsequent model training, prevents the accuracy of the obtained model from being reduced, and improves the reliability of model training.
Optionally, on the basis of the embodiment corresponding to fig. 3, in an optional embodiment of the target object identification method provided in the embodiment of the present application, the determining, based on the object sample feature set and the business sample feature set, the first interestingness feature set and the second interestingness feature set specifically includes:
aggregating the object sample feature set and the service sample feature set based on a plurality of preset time periods to obtain a fifth interest degree feature set;
performing feature processing on the fifth interestingness feature set to obtain a sixth interestingness feature, wherein the feature processing comprises at least one of normalization feature processing and discretization feature processing;
and determining the first interestingness feature set and the second interestingness feature set based on the sixth interestingness feature.
In this embodiment, considering that the "willingness to pay" in the present scheme belongs to a long-term, stable requirement of the object, the characteristics of the object samples and the service samples over multiple preset time periods need to be calculated. That is, the object identification apparatus performs aggregation processing on the object sample feature set and the service sample feature set based on multiple preset time periods, so as to obtain a fifth interestingness feature set. Specifically, the object sample feature sets and the service sample feature sets in different preset time periods are aggregated to obtain feature sets for the different preset time periods (i.e. the fifth interestingness feature set); for example, the object sample feature set and the service sample feature set over the past half year are aggregated, or those over the past 3 months are aggregated. It should be understood that the preset time periods may include the past half year, the past 3 months, the past month, the past week, and the like. Secondly, the aggregation methods in this embodiment include, but are not limited to, summation, median, standard deviation, and so on, and the foregoing examples should not be construed as limitations of this embodiment.
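A minimal sketch of the multi-window aggregation, assuming the per-day features are held in a pandas DataFrame with a date column; the window lengths, column names and aggregation functions are illustrative:

    import pandas as pd

    # Toy per-day features for one object sample; column names are hypothetical.
    df = pd.DataFrame({
        "date": pd.date_range("2023-01-01", periods=180, freq="D"),
        "ad_clicks": range(180),
        "app_usage_minutes": [30 + i % 7 for i in range(180)],
    })

    windows = {"half_year": 180, "three_months": 90, "one_month": 30, "one_week": 7}

    fifth_interestingness = {}
    for name, days in windows.items():
        recent = df.tail(days)  # rows falling in the preset time period
        fifth_interestingness[name] = recent[["ad_clicks", "app_usage_minutes"]].agg(["sum", "median", "std"])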
Based on this, the object recognition device performs feature processing on feature sets (i.e., fifth interestingness feature sets) in different preset time periods to obtain sixth interestingness features, and the object recognition device determines the first interestingness feature set and the second interestingness feature set based on the sixth interestingness features after the feature processing.
Specifically, the feature processing in this embodiment includes at least one of normalized feature processing and discretized feature processing; Gaussian normalization is selected for the normalized feature processing in this embodiment and is not described again here. The following mainly describes how the discretized feature processing is performed.
(1) One-Hot coding (One-Hot Encoding)
One-Hot Encoding mainly uses an N-bit status register to encode N states, each state being represented by its own independent register bit, with only one bit being active at any time. In actual machine learning tasks, features are not always continuous values but may be categorical values; for example, gender may take the values "male" and "female". For such features it is usually necessary to digitize them; for example, One-Hot Encoding is performed on the gender of the object sample included in the object basic attribute information feature in the fifth interestingness feature set, and the obtained result is: male (1, 0) and female (0, 1).
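A minimal sketch of One-Hot Encoding for the gender field, written in plain Python so that the mapping matches the example above:

    categories = ["male", "female"]

    def one_hot(value: str) -> list[int]:
        """N-bit encoding where only the bit of the matching category is active."""
        return [1 if value == c else 0 for c in categories]

    print(one_hot("male"))    # [1, 0]
    print(one_hot("female"))  # [0, 1]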
(2) Counting code (Count Encoding)
Count Encoding may be used for discrete variables or for continuous variables with few distinct values. For example, consider the Wi-Fi point of interest (POI) feature of an object sample: the POI of the object may be a residence, a shop, a mailbox, a bus station, and so on. The degree of interest of the object sample in a POI is then identified based on Count Encoding; for example, the "food-Chinese dish-YueCai" POI of the object sample is visited 3 times within the same week.
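A minimal sketch of Count Encoding, counting how often each POI category appears in an object sample's weekly records (the record values are illustrative):

    from collections import Counter

    weekly_poi_records = ["food-Chinese dish-YueCai", "bus station", "food-Chinese dish-YueCai",
                          "residence", "food-Chinese dish-YueCai"]

    count_encoded = Counter(weekly_poi_records)
    print(count_encoded["food-Chinese dish-YueCai"])  # 3 visits in one week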
(3) Merging coding (Consolidation Encoding)
Consolidation Encoding summarizes several values of one categorical variable into the same information. For example, for the system version of the target terminal device included in the device basic attribute information feature in the fifth interestingness feature set, the values "4.2", "4.4" and "5.0" can all be generalized by Consolidation Encoding into "low-version system". Experiments show that applying Consolidation Encoding brings a greater positive benefit than directly performing One-Hot Encoding on the individual values "4.2", "4.4" and "5.0".
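A minimal sketch of Consolidation Encoding, mapping several low system-version values to one generalized value before any further encoding (the mapping is illustrative):

    consolidation_map = {"4.2": "low-version system", "4.4": "low-version system", "5.0": "low-version system"}

    def consolidate(system_version: str) -> str:
        """Summarize several raw values of one categorical variable into the same information."""
        return consolidation_map.get(system_version, system_version)

    print(consolidate("4.4"))   # "low-version system"
    print(consolidate("11.0"))  # unchanged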
(4) Word Embedding (Category Embedding)
After the different features in the fifth interestingness feature set have been encoded through the One-Hot Encoding, Count Encoding or Consolidation Encoding described above, the feature sets in this scheme still contain many different types of features, both discrete and continuous, and the different types of features are highly sparse. Discrete categorical features are usually processed with One-Hot Encoding, but the dimensionality of the input features after One-Hot Encoding is very high and very sparse. In order to reduce the dimensionality, avoid over-fitting in subsequent model training and improve the stability of the obtained model, Category Embedding is adopted to convert the discrete features into low-dimensional, dense, real-valued embedding variables.
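A minimal sketch of Category Embedding, assuming PyTorch; it turns a high-dimensional sparse categorical index into a low-dimensional dense real-valued vector that is learned together with the rest of the network (the sizes are illustrative):

    import torch
    import torch.nn as nn

    num_categories = 10_000   # e.g. number of distinct one-hot dimensions
    embedding_dim = 16        # low-dimensional dense representation

    embedding = nn.Embedding(num_categories, embedding_dim)  # equivalent to a learned lookup table

    category_index = torch.tensor([42])        # index of the active one-hot bit
    dense_feature = embedding(category_index)  # shape: (1, 16), trained with the network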
(5) Missing value Embedding (NaN Embedding)
In the foregoing feature processing, missing features may occur. Missing-value embedding (NaN Embedding) for such features may include methods such as the removal method, the mean-filling method and the missing-mark method. Converting the missing values of the features into an embedding representation brings further positive benefits to the model training effect. It should be understood that the Category Embedding and NaN Embedding operations are in fact a multiplication of the encoded input by a feature matrix, which, like the other parameters in the network, needs to be learned together with the network, and can also be regarded as a lookup.
The removal method specifically includes a simple removal method and a weighted removal method. The simple removal method removes the fifth interestingness features that contain missing values. The weighted removal method is used when the missing values of the fifth interestingness features are not missing completely at random: the bias is reduced by weighting the complete data, that is, after the incomplete data of the fifth interestingness features are marked, each fifth interestingness feature in the fifth interestingness feature set is given a different weight, where the weights of the complete fifth interestingness features can be obtained by logistic regression or probit regression, and part of the fifth interestingness features are then removed according to the weights.
Secondly, a missing value may be imputed with its most probable value, which causes less information loss than deleting the incomplete object sample and service sample. In data mining, deleting one attribute value may mean discarding a large number of other attribute values, which is a great waste of information, so there are ideas and methods for imputing a missing value with a probable value; the commonly used ones are as follows:
1. and (4) mean value interpolation. Dividing the attribute of the data into a fixed distance type and a non-fixed distance type, and if the missing value of the characteristic is the fixed distance type, interpolating the missing value by using the average value of the attribute existing values of the characteristic; if the missing value of the feature is of a non-interval type, the missing value is filled up by the mode of the attribute of the feature (i.e., the value with the highest frequency of occurrence) according to the mode principle in statistics.
2. Same-class mean imputation. Same-class mean imputation also belongs to single-value imputation; the difference is that a hierarchical clustering model is used to predict the class of the sample whose variable is missing, and the missing value is then imputed with the mean of that class.
3. Multiple Imputation (MI). The idea of multiple imputation derives from Bayesian estimation: the value to be imputed is regarded as random, and it is imputed several times with values estimated from the observed data.
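A minimal sketch of the mean and mode imputation described in item 1 above, assuming pandas and distinguishing interval-type (numeric) from non-interval-type (categorical) columns; the column names and values are illustrative:

    import pandas as pd

    df = pd.DataFrame({
        "daily_usage_minutes": [35.0, None, 42.0, 40.0],       # interval type: impute with the mean
        "poi_category": ["residence", "shop", None, "shop"],   # non-interval type: impute with the mode
    })

    df["daily_usage_minutes"] = df["daily_usage_minutes"].fillna(df["daily_usage_minutes"].mean())
    df["poi_category"] = df["poi_category"].fillna(df["poi_category"].mode().iloc[0])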
In the embodiment of the application, another method for processing input features before object identification is provided. The object sample feature set and the service sample feature set are aggregated based on multiple preset time periods to obtain the features of the object samples and the service samples over those periods; introducing the time dimension into feature processing yields the long-term, stable demand features of the object for the target service, so the features carry information of more dimensions. Secondly, features of different types can be further encoded and extracted through feature processing, and the encoded features are further processed through word embedding, which avoids over-fitting of the subsequent model during training and improves the stability of the obtained model.
Optionally, on the basis of the embodiment corresponding to fig. 3, in an optional embodiment of the target object identification method provided in the embodiment of the present application, the determining the first interestingness feature set and the second interestingness feature set based on the sixth interestingness feature specifically includes:
performing dimensionality reduction processing on the sixth interestingness characteristic to obtain a first object behavior characteristic;
sequencing the sixth interestingness characteristic to obtain a second object behavior characteristic;
performing aggregation processing on the first object behavior characteristics and the second object behavior characteristics to obtain a seventh interestingness characteristic set;
and processing the seventh interestingness feature set based on the service sample, and determining a first interestingness feature set and a second interestingness feature set.
In this embodiment, the object identification apparatus specifically performs dimension reduction processing on the sixth interestingness feature to obtain a first object behavior feature. Specifically, the object recognition device inputs the sixth interestingness feature into a deep neural network model, performs embedding on the Wi-Fi connection trajectory data of the object, and, after the deep neural network model has been trained, takes the embedding layer as the Wi-Fi behavior information (i.e., the first object behavior feature) of the object sample. For example, object sample A connects to the same 2 Wi-Fi networks every day while object sample B connects to different Wi-Fi networks every day, which physically indicates that object sample A belongs to an object group with stable travel patterns and object sample B to an object group with fluctuating travel patterns; the different object behavior features corresponding to object sample A and object sample B can thus be obtained in the above manner.
Secondly, the object recognition device also needs to sort the sixth interestingness feature to obtain a second object behavior feature. Specifically, the object recognition device performs embedding extraction, in a list-embedding (List-Embedding) manner, on the traffic usage behavior sequence of the different software corresponding to the target service used by the object sample, thereby obtaining a low-dimensional, dense object behavior feature (the second object behavior feature).
Based on the first object behavior characteristics and the second object behavior characteristics, aggregation processing is carried out to obtain a seventh interestingness characteristic set. The object recognition device can store the seventh interestingness feature set in the HDFS, so that the subsequent processing flow can be accessed quickly. Further, since the seventh interestingness feature set is generated through a plurality of aforementioned processes, and the data quality of the features is difficult to guarantee, feature data quality monitoring is required, so that feature cleaning and feature filtering for feature verification are also required to be performed on the seventh interestingness feature set, that is, the object identification apparatus determines the first interestingness feature set and the second interestingness feature set by processing the seventh interestingness feature set based on the service sample.
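A minimal sketch of how the two behavior features could be derived and aggregated, assuming PyTorch and using plain embedding layers as stand-ins for the trained deep neural network and the list-embedding model described above; vocabulary sizes, dimensions and index sequences are illustrative:

    import torch
    import torch.nn as nn

    wifi_vocab, app_vocab, dim = 500, 50, 8   # illustrative sizes

    wifi_embedding = nn.Embedding(wifi_vocab, dim)  # stands in for the trained model's embedding layer
    app_embedding = nn.Embedding(app_vocab, dim)    # stands in for the list-embedding of usage sequences

    wifi_trajectory = torch.tensor([3, 3, 17, 3])    # Wi-Fi connection trajectory of one object sample
    app_usage_sequence = torch.tensor([1, 4, 4, 2])  # traffic usage behavior sequence

    first_behavior = wifi_embedding(wifi_trajectory).mean(dim=0)     # first object behavior feature
    second_behavior = app_embedding(app_usage_sequence).mean(dim=0)  # second object behavior feature

    seventh_interestingness = torch.cat([first_behavior, second_behavior])  # aggregated feature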
In the embodiment of the application, another method for processing the input features before object recognition is provided, and the sixth interestingness feature is subjected to dimension reduction processing and sorting processing to obtain the object behavior feature which is more consistent with the real behavior of the object, so that the accuracy of model training can be further improved.
Optionally, on the basis of the embodiment corresponding to fig. 3, in an optional embodiment of the target object identification method provided in the embodiment of the present application, based on the service sample, the seventh interestingness feature set is processed to determine the first interestingness feature set and the second interestingness feature set, which specifically includes:
determining a preset strategy based on the service sample;
screening the seventh interestingness feature set based on a preset strategy to obtain features meeting the preset strategy and features not meeting the preset strategy;
calculating the average value of the features meeting the preset strategy to obtain the feature average value;
carrying out missing marking processing on the features which do not meet the preset strategy to obtain a feature set after missing marking;
and splicing the feature average value and the feature set after missing marking to determine a first interestingness feature set and a second interestingness feature set.
In this embodiment, as can be known from the foregoing embodiment, since the seventh interestingness feature set is generated through a plurality of the foregoing processes, the data quality of its features is difficult to guarantee, so feature data quality monitoring needs to be performed, and feature cleaning and feature filtering need to be performed on the seventh interestingness feature set for feature verification. The object identification device specifically determines a preset strategy based on the service sample, and screens the seventh interestingness feature set based on the preset strategy to obtain features meeting the preset strategy and features not meeting the preset strategy. Specifically, the object identification device determines the preset policy based on the business experience of the business sample, and cleans, filters and verifies the seventh interestingness feature set. The preset policy includes, but is not limited to, the following rules: the daily duration of using the application software corresponding to the target business should be less than 16 hours, and a daily usage duration greater than 24 hours is an abnormal feature. Based on this, the seventh interestingness feature set is verified according to the preset policy, and features satisfying the preset policy and features not satisfying the preset policy are determined from it. For example, a feature indicating that the application software corresponding to the target business is used for less than 16 hours per day satisfies the preset policy; a feature indicating more than 16 hours per day does not satisfy the preset policy; and a feature indicating that the object sample uses the application software for more than 24 hours per day does not satisfy the preset policy and is, specifically, an abnormal feature.
Further, through the above processing, the features meeting the service requirements are spliced, the features not meeting the requirements are subjected to missing marking, and finally the splicing of the model-input features is completed, so that a first interestingness feature set and a second interestingness feature set that can be input into the classification model to be trained are obtained. Specifically, the object recognition device calculates the average value of the features meeting the preset strategy to obtain a feature average value, then performs missing marking processing on the features not meeting the preset strategy to obtain a feature set after missing marking, and finally performs splicing processing on the feature average value and the feature set after missing marking to determine the first interestingness feature set and the second interestingness feature set.
For example, if a group of features included in the seventh interestingness feature set is (0.2, 0.1, 0.9, 4, 0), where "4" is a feature that does not satisfy the preset policy, and "0.2", "0.1", "0.9" and "0" are features that satisfy the preset policy, the features that satisfy the preset policy are first averaged, that is, "0.2", "0.1", "0.9" and "0" are averaged, giving a feature average value of "0.3". Then missing marking processing is performed on the feature that does not satisfy the preset policy, that is, "4" is marked as missing by replacing it with "-1", and the obtained group of features is (0.2, 0.1, 0.9, -1, 0). Finally, "0.3" is filled into the position marked as "-1", thereby obtaining (0.2, 0.1, 0.9, 0.3, 0). The first interestingness feature set and the second interestingness feature set can be determined by performing the above processing on each of the groups of features included in the seventh interestingness feature set.
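A minimal Python sketch of this missing marking and mean filling step is given below; the validity check standing in for the preset strategy (values must lie in [0, 1]) is an assumption chosen only so that the value 4 from the example above is rejected.

```python
import numpy as np

# Hypothetical validity check standing in for the "preset strategy": a feature
# value is treated as valid when it lies in [0, 1]; the value 4 from the
# worked example violates it.
def satisfies_policy(value: float) -> bool:
    return 0.0 <= value <= 1.0

def clean_feature_group(features):
    features = np.asarray(features, dtype=float)
    valid = np.array([satisfies_policy(v) for v in features])
    mean_of_valid = features[valid].mean()                    # feature average value
    marked = np.where(valid, features, -1.0)                  # missing marking with -1
    filled = np.where(marked == -1.0, mean_of_valid, marked)  # splice the mean back in
    return marked, filled

marked, filled = clean_feature_group([0.2, 0.1, 0.9, 4, 0])
print(marked)   # approximately [ 0.2  0.1  0.9 -1.   0. ]
print(filled)   # approximately [ 0.2  0.1  0.9  0.3  0. ]
```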
In the embodiment of the application, another method for processing input features before object recognition is provided, because the seventh interestingness feature set is generated through feature processing for multiple times, the data quality of the features in the seventh interestingness feature set is difficult to guarantee, the data quality of the features in the seventh interestingness feature set is monitored and screened based on a service sample, the splicing of the input features is realized, and finally the input features are abnormal and available, so that the reliability of model training is improved.
Optionally, on the basis of the embodiment corresponding to fig. 3, in an optional embodiment of the target object identification method provided in the embodiment of the present application, the splicing processing of the feature average value and the feature set after missing marking to determine the first interestingness feature set and the second interestingness feature set specifically includes:
splicing the feature average value and the feature set after missing marking to obtain a feature set after splicing processing;
and determining a first interestingness feature set and a second interestingness feature set from the feature sets after splicing processing based on a preset strategy.
In this embodiment, the object identification apparatus specifically performs splicing processing on the feature average value and the feature set after missing marking to obtain a feature set after splicing processing; the feature set after splicing processing includes, for example, the group (0.2, 0.1, 0.9, 0.3, 0) obtained in the example of the foregoing embodiment. At this time, the feature set after splicing processing may still be mixed with features that should not be input into the model, so the scheme also needs to determine the first interestingness feature set and the second interestingness feature set from the feature set after splicing processing based on a preset strategy. It should be understood that the first interestingness feature set and the second interestingness feature set both belong to the feature set after splicing processing, and the feature set after splicing processing is combined into one feature matrix; therefore, the first interestingness feature set and the second interestingness feature set in the present scheme are not two independent feature matrices, but rather features with different feature point scores distinguished within one feature matrix.
Specifically, as can be seen from the foregoing embodiments, the feature point score indicates the importance degree of a feature. The feature point score is obtained through the DeepLIFT algorithm, a feature scoring algorithm based on back propagation; the feature point scores of the features in the feature set after splicing processing can be obtained through the DeepLIFT algorithm, so as to identify the important features that have a great influence on the subsequent interest point classification result. The higher the feature point score, the greater the importance; the lower the feature point score, the lower the importance. Therefore, the features whose feature point scores are smaller than the first threshold in the feature set after splicing processing are determined as the first interestingness feature set, that is, the importance of the features in the first interestingness feature set is low, and the object sample features whose feature point scores are greater than the first threshold in the feature set after splicing processing are determined as the second interestingness feature set, that is, the importance of the features in the second interestingness feature set is high. It should be understood that the value of the first threshold is determined by experiment and/or statistics based on a large amount of data, and the value of the first threshold is not specifically limited herein.
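The following is a minimal sketch of the threshold split; the feature point scores would in practice come from a back-propagation-based attribution method such as DeepLIFT applied to a trained network, and the scores and the first threshold shown here are hypothetical numbers.

```python
import numpy as np

# Sketch of splitting one feature matrix by feature point score. The scores
# are assumed to be precomputed (e.g. by DeepLIFT); the values are hypothetical.
feature_matrix = np.array([[0.2, 0.1, 0.9, 0.3, 0.0]])       # feature set after splicing
feature_point_scores = np.array([0.8, 0.05, 0.6, 0.1, 0.4])  # assumed importance scores
first_threshold = 0.3

low_importance = feature_point_scores < first_threshold   # first interestingness feature set
high_importance = feature_point_scores > first_threshold  # second interestingness feature set

first_interestingness_set = feature_matrix[:, low_importance]
second_interestingness_set = feature_matrix[:, high_importance]
print(first_interestingness_set)   # columns whose score is below the first threshold
print(second_interestingness_set)  # columns whose score is above the first threshold
```

Consistent with the explanation above, both sets are column subsets of the same feature matrix rather than two independent matrices.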
In the embodiment of the application, a method for processing input features before object identification is provided. Feature scoring is performed on each feature in the feature set after splicing processing through the DeepLIFT algorithm to indicate the importance degree of each feature, so that the features that have a large influence and those that have a small influence on the subsequent interest point classification result are screened out; this makes it convenient to perform noise adding processing on the first interestingness feature set and improves the feasibility of the scheme.
Specifically, the foregoing embodiments introduce how feature processing is performed before model training. For a further understanding of the present solution, please refer to fig. 7, which is a schematic diagram of another embodiment of feature processing in the embodiment of the present application. As shown in fig. 7, the object recognition device first performs feature processing on an object sample set C1 for the target service to obtain an object sample feature set C2, and performs feature processing on a service sample set C3 for the target service to obtain a service sample feature set C4. Based on this, the object recognition device performs aggregation processing on the object sample feature set C2 and the service sample feature set C4 based on a plurality of preset time periods to obtain a fifth interestingness feature set C5, and performs feature processing including at least one of normalization feature processing and discretization feature processing on the fifth interestingness feature set C5 to obtain a sixth interestingness feature C6.
Further, the object recognition device performs dimension reduction on the sixth interestingness feature C6 to obtain a first object behavior feature C7, performs sorting processing on the sixth interestingness feature C6 to obtain a second object behavior feature C8, and then performs aggregation processing on the first object behavior feature C7 and the second object behavior feature C8 to obtain a seventh interestingness feature set C9. Furthermore, the object recognition device determines a preset strategy based on the service sample and, based on the preset strategy, screens the seventh interestingness feature set C9 to obtain features meeting the preset strategy and features not meeting the preset strategy; it then averages the features meeting the preset strategy in the seventh interestingness feature set C9 to obtain a feature average value C10, and performs missing marking on the features not meeting the preset strategy in the seventh interestingness feature set C9 to obtain a feature set C11 after missing marking.
Finally, the feature average value C10 and the feature set C11 after missing marking are spliced to obtain a feature set C12 after splicing processing. At this time, within the feature set C12 after splicing processing, the object recognition apparatus performs noise adding processing, in the manner described in the foregoing embodiments, on the first interestingness feature set consisting of the P object sample features whose feature point scores are smaller than the first threshold, to obtain a first noise adding feature set; the feature set then consists of the first noise adding feature set and a second interestingness feature set consisting of the Q object sample features whose feature point scores are greater than the first threshold. Further, a third interestingness feature set and a fourth interestingness feature set are determined from the feature set composed of the second interestingness feature set and the first noise adding feature set, where the correlation between object sample features in the third interestingness feature set is smaller than a second threshold, and the correlation between object sample features in the fourth interestingness feature set is greater than the second threshold. Noise adding processing is then performed on the third interestingness feature set to obtain a second noise adding feature set, and finally the feature set obtained after the two noise adding processes is input into the classification model C13 to be trained. It should be understood that fig. 7 is intended to facilitate an understanding of how the present solution processes input features before model training, and should not be construed as a limitation of the present solution.
It should be understood that, as can be seen from fig. 2, the step of feature processing is included in the object recognition process, and the specific methods of feature processing include the method of perturbing (noise adding) features within the model shown in fig. 6 and the method of feature processing before model training shown in fig. 7; in this embodiment, fig. 6 and fig. 7 are described separately, which should not be construed as a limitation of the present solution.
Optionally, on the basis of the embodiment corresponding to fig. 3, in an optional embodiment of the target object identification method provided in the embodiment of the present application, the target object identification method further includes:
obtaining interest point classification results corresponding to N object samples of each classification model to be selected through a plurality of classification models to be selected based on the feature set after splicing, wherein the classification models to be selected are models of different types;
respectively training a plurality of classification models to be selected based on interest point classification results corresponding to N object samples of each classification model to be selected and interest degree labels corresponding to the N object samples to obtain a plurality of classification models;
and determining a classification model to be trained from the plurality of classification models.
In this embodiment, the object identification apparatus may further obtain, based on the feature set after splicing processing, the interest point classification results corresponding to the N object samples of each classification model to be selected through the multiple classification models to be selected, where the multiple classification models to be selected are models of different types; respectively train the multiple classification models to be selected based on the interest point classification results corresponding to the N object samples of each classification model to be selected and the interest degree labels corresponding to the N object samples, so as to obtain multiple classification models; and finally determine the classification model to be trained from the multiple classification models. Specifically, in the embodiment of the present application the samples are divided into a training set and a validation set at a ratio of 5:1, and this ratio should not be construed as a limitation of the present application. The multiple classification models to be selected are then trained in parallel based on default parameters to obtain multiple classification models, the model with the best effect is selected from the multiple classification models according to the model evaluation index AUC, and the classification model with the best effect is determined as the classification model to be trained. The greater the AUC value corresponding to a classification model, the more likely the classification model is to rank positive samples ahead of negative samples and thus obtain a better classification result. It should be understood that the model evaluation index AUC is irrelevant to the absolute values of model predictions and is only concerned with the ranking effect, so this way of evaluation is closer to the requirements of the actual business; it considers the classification capability of the classification model on positive examples and negative examples at the same time, so the classification model can still be reasonably evaluated when the samples are unbalanced. Secondly, the multiple classification models to be selected in the embodiment of the present application include, but are not limited to, SVM, CNN, RALM, DCN and the like, and the DCN is taken as an example of the classification model to be trained in the subsequent description.
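For illustration, the sketch below selects a classification model to be trained by the model evaluation index AUC on a 5:1 training/validation split; the synthetic data and the scikit-learn models (SVC, MLPClassifier, LogisticRegression) are stand-ins for the candidate models of the embodiment (SVM, CNN, RALM, DCN), not the models actually used.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LogisticRegression

# Assumption: synthetic data and off-the-shelf sklearn models stand in for the
# candidate classification models to be selected.
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 20))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=600) > 0).astype(int)

# 5:1 split between training set and validation set, as in the embodiment.
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=1/6, random_state=0)

candidates = {
    "svm": SVC(probability=True),
    "mlp": MLPClassifier(max_iter=500),
    "lr": LogisticRegression(max_iter=500),
}
aucs = {}
for name, model in candidates.items():
    model.fit(X_train, y_train)                # train with default parameters
    scores = model.predict_proba(X_val)[:, 1]  # predicted probabilities on validation set
    aucs[name] = roc_auc_score(y_val, scores)  # model evaluation index AUC

best = max(aucs, key=aucs.get)
print(aucs, "-> classification model to be trained:", best)
```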
For convenience of understanding, the types of the multiple classification models to be selected are taken to include SVM, CNN and DCN as an example for explanation. Please refer to fig. 8, which is a schematic diagram of an embodiment of determining the classification model to be trained in the embodiment of the present application. As shown in fig. 8, D1 refers to the feature set after splicing processing, D2 refers to the support vector machine to be selected, D3 refers to the convolutional neural network to be selected, and D4 refers to the deep feature cross network to be selected. Based on this, in the diagram (A) in fig. 8, the feature set D1 after splicing processing is input to the support vector machine D2 to be selected, and the interest point classification result A corresponding to the N object samples output by the support vector machine D2 to be selected can be obtained; then the support vector machine D2 to be selected is trained based on the interest point classification result A corresponding to the N object samples and the interest degree labels corresponding to the N object samples, so as to obtain the support vector machine D5 shown in the diagram (B) in fig. 8.
Similarly, as shown in the diagram (A) in fig. 8, the feature set D1 after splicing processing is input to the convolutional neural network D3 to be selected, so that the interest point classification result B corresponding to the N object samples output by the convolutional neural network D3 to be selected can be obtained; then the convolutional neural network D3 to be selected is trained based on the interest point classification result B corresponding to the N object samples and the interest degree labels corresponding to the N object samples, so as to obtain the convolutional neural network D6 shown in the diagram (B) in fig. 8. Next, in the diagram (A) in fig. 8, the feature set D1 after splicing processing is input to the deep feature cross network D4 to be selected, and the interest point classification result C corresponding to the N object samples output by the deep feature cross network D4 to be selected can be obtained; then the deep feature cross network D4 to be selected is trained based on the interest point classification result C corresponding to the N object samples and the interest degree labels corresponding to the N object samples, so as to obtain the deep feature cross network D7 shown in the diagram (B) in fig. 8.
In the diagram (B) in fig. 8, the model evaluation index AUC is calculated for the support vector machine D5, the convolutional neural network D6 and the deep feature cross network D7, and the classification model with the highest AUC value is determined as the classification model D8 to be trained that is described in the embodiment of the present application.
Further, the DCN is described in detail with reference to fig. 9. Fig. 9 is a schematic structural diagram of the deep feature cross network (DCN) in the embodiment of the present application. As shown in fig. 9, the object sample set and the service sample set are used as inputs of the embedding and stacking layer E1; by the method described in the foregoing embodiments, the embedding and stacking layer E1 outputs the fourth interestingness feature set and the second noise adding feature set obtained after noise addition, and the fourth interestingness feature set and the second noise adding feature set are used as the common input of the cross network (Cross Network) E2 and the deep network (Deep Network) E3.
The core idea of the cross network E2 is to apply explicit feature crossing in an efficient way. The cross network E2 is composed of a number of cross layers (i.e., cross layer E21, cross layer E22, cross layer E23, and so on), and each cross layer follows the formula:
$x_{L+1} = x_0 x_L^{\top} w_L + b_L + x_L$ (1)
wherein $x_{L+1}$ is the output of the (L+1)-th cross layer, $x_L$ is the output of the L-th cross layer, $x_0$ is the original input of the cross network, and $w_L$ and $b_L$ are the weight and bias connection parameters between the L-th cross layer and the (L+1)-th cross layer.
It should be understood that all variables in equation (1) are column vectors rather than matrices. The output of each cross layer is the output of the previous cross layer plus a cross term, that is, each cross layer fits the residual between its output $x_{L+1}$ and the output $x_L$ of the previous layer. The total number of parameters of the cross network E2 is very small, so the complexity introduced by the cross network E2 is negligible; the dimension of each layer is kept consistent, and the final output dimension is still equal to the input dimension. Because the small number of parameters of the cross network limits the model capacity, the DCN introduces the deep network E3 in parallel in order to capture highly nonlinear interactions.
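A minimal numpy sketch of two stacked cross layers implementing equation (1) is given below; the dimension, the random weights, and the number of layers are illustrative assumptions only.

```python
import numpy as np

def cross_layer(x0, x_l, w_l, b_l):
    # One cross layer of equation (1): x_{L+1} = x0 * (x_l^T w_l) + b_l + x_l.
    # All arguments are 1-D vectors of the same dimension d, so every layer
    # keeps the input dimension and adds only 2*d parameters.
    return x0 * (x_l @ w_l) + b_l + x_l

d = 5
rng = np.random.default_rng(0)
x0 = rng.normal(size=d)            # embedded and stacked input of the cross network
w1, b1 = rng.normal(size=d), np.zeros(d)
w2, b2 = rng.normal(size=d), np.zeros(d)

x1 = cross_layer(x0, x0, w1, b1)   # cross layer E21
x2 = cross_layer(x0, x1, w2, b2)   # cross layer E22
print(x2.shape)                    # (5,) -- output dimension equals input dimension
```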
Secondly, the deep network E3 is a fully connected feed-forward neural network, and the deep network E3 is composed of a plurality of deep layers (i.e., deep layer E31, deep layer E32, deep layer E33, and so on), each deep layer following the formula:
$h_{L+1} = f(W_L h_L + b_L)$ (2)
wherein $h_{L+1}$ is the output of the (L+1)-th deep layer, $h_L$ is the output of the L-th deep layer, $W_L$ and $b_L$ are the weight and bias connection parameters between the L-th deep layer and the (L+1)-th deep layer, and $f(\cdot)$ is an activation function such as ReLU.
The combination layer (Combination Layer) E4 concatenates the outputs of the cross network E2 and the deep network E3, obtains an initial prediction probability (logits) after a weighted summation, and passes the logits through a sigmoid function to obtain the interest point classification results (i.e., prediction probabilities) corresponding to the N object samples.
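The sketch below illustrates the deep network of equation (2) together with the combination layer E4; the number of deep layers, the dimensions, and the random values standing in for the trained parameters and for the cross network output are assumptions made only for illustration.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
d = 5                          # dimension of the embedding and stacking output
x_cross = rng.normal(size=d)   # stands in for the final output of cross network E2

# Deep network E3: two fully connected feed-forward layers, equation (2).
h = rng.normal(size=d)         # same common input as the cross network
for _ in range(2):
    W, b = rng.normal(size=(d, d)), np.zeros(d)
    h = relu(W @ h + b)        # h_{L+1} = f(W_L h_L + b_L)

# Combination layer E4: concatenate the two outputs, weighted sum -> logits,
# then sigmoid -> prediction probability (interest point classification result).
combined = np.concatenate([x_cross, h])
w_logits = rng.normal(size=combined.shape[0])
logits = w_logits @ combined
probability = sigmoid(logits)
print(probability)
```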
In the embodiment of the application, a method for screening models in object recognition is provided. The classification models obtained from multiple types of classification models to be selected are screened according to the model evaluation index AUC; because the AUC is irrelevant to the absolute values of model predictions and only concerns the ranking effect, the screening is closer to the actual requirements of the target service.
Optionally, on the basis of the embodiment corresponding to fig. 3, in an optional embodiment of the target object identification method provided in the embodiment of the present application, the target object identification method further includes:
updating model parameters of the classification model to be trained according to a target loss function based on the interest point classification results corresponding to the N object samples and the interest degree labels corresponding to the N object samples to obtain a target classification model, wherein the target loss function is obtained after noise adding.
In this embodiment, the object identification apparatus may further update the model parameters of the classification model to be trained according to the target loss function based on the interest point classification results corresponding to the N object samples and the interest degree labels corresponding to the N object samples, to obtain the target classification model, where the target loss function is obtained after noise adding processing. Specifically, a third round of differential protection is carried out in the model training process: noise is added to the target loss function rather than to the prediction result, and the deviation from the optimal or sub-optimal solution caused by the noise addition is then corrected as far as possible by the parameter adaptation of the classification model to be trained during forward and backward propagation.
Specifically, the object recognition device performs iterative training with the interest point classification results corresponding to the N object samples as the optimization target: a loss value of the target loss function is determined according to the difference between the interest point classification results corresponding to the N object samples and the interest degree labels corresponding to the N object samples; whether the target loss function reaches a convergence condition is judged according to the loss value; and if the convergence condition is not reached, the model parameters of the classification model to be trained are updated by using the loss value of the target loss function.
The convergence condition of the target loss function may be that the value of the target loss function is smaller than or equal to a first preset threshold, where the first preset threshold may be, for example, 0.005, 0.01, 0.02, or another value close to 0; or that the difference between the values of the target loss function in two adjacent iterations is smaller than or equal to a second preset threshold, where the second preset threshold may be, for example, 0.005, 0.01, 0.02, or another value close to 0. Other convergence conditions may also be adopted, which is not limited herein. It should be understood that, in practical applications, the target loss function may also be a mean square error loss function, a ranking loss function, a focal loss function, and the like, which is not limited herein.
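A training-loop sketch with a noised target loss function and a convergence check against the first preset threshold is shown below; the model, the data, and in particular the noise mechanism (Laplace-noise reweighting of the per-sample losses, chosen here only so that the perturbation actually reaches the gradients) are assumptions of this sketch and are not specified by the embodiment.

```python
import torch
import torch.nn as nn

# Sketch only. Assumptions: a toy model and random data; noise is injected by
# reweighting per-sample losses with Laplace noise before averaging.
torch.manual_seed(0)
model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.BCEWithLogitsLoss(reduction="none")
loss_threshold = 0.01                              # first preset threshold

features = torch.randn(256, 10)                    # stand-in for the noised feature sets
labels = torch.randint(0, 2, (256, 1)).float()     # interest degree labels

for step in range(1000):
    optimizer.zero_grad()
    logits = model(features)
    per_sample_loss = criterion(logits, labels)
    noise = torch.distributions.Laplace(0.0, 0.05).sample(per_sample_loss.shape)
    target_loss = ((1.0 + noise) * per_sample_loss).mean()   # noised target loss
    target_loss.backward()                          # parameter adaptation compensates the noise
    optimizer.step()
    if target_loss.item() <= loss_threshold:        # convergence condition
        break
```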
In the embodiment of the application, another target object identification method is provided, in which noise is added to the target loss function rather than to the prediction result, and the deviation from the optimal or sub-optimal solution caused by the noise addition is corrected by the parameter adaptation of the classification model to be trained during forward and backward propagation, so that the reliability of the obtained target classification model is improved and the accuracy of the output result of the target classification model is improved.
Referring to fig. 10, fig. 10 is a schematic view of an embodiment of an object recognition apparatus in an embodiment of the present application, and as shown in the drawing, the object recognition apparatus 900 includes:
an obtaining module 901, configured to obtain a target object for a target service, where the target object includes at least one of target object basic attribute information, target device basic attribute information, and target network connection attribute information;
the processing module 902 is configured to perform characterization processing on a target object to obtain a target object feature, where the target object feature and the target object have a corresponding relationship;
a determining module 903, configured to determine, based on the target object feature, a first interestingness feature and a second interestingness feature after feature processing is performed, where a feature point score of the first interestingness feature is smaller than a first threshold, a feature point score of the second interestingness feature is larger than the first threshold, and the feature point score indicates an importance degree of the feature;
the processing module 902 is further configured to perform noise adding processing on the first interestingness feature to obtain a first noise adding feature;
the obtaining module 901 is further configured to obtain an interest point classification result corresponding to the target object through the target classification model based on the second interest degree feature and the first noise adding feature;
the determining module 903 is further configured to determine an interest degree label of the target object according to the interest point classification result corresponding to the target object.
Optionally, on the basis of the embodiment corresponding to fig. 10, in another embodiment of the object identifying apparatus 900 provided in the embodiment of the present application, the obtaining module 901 is further configured to obtain a service sample set for a target service;
the processing module 902 is further configured to perform characterization processing on the service sample set to obtain a service sample feature set;
the determining module 903 is specifically configured to determine a first interestingness feature and a second interestingness feature after performing feature processing based on the target object feature and the service sample feature set.
Optionally, on the basis of the embodiment corresponding to fig. 10, in another embodiment of the object identification apparatus 900 provided in the embodiment of the present application, the obtaining module 901 is specifically configured to determine a third interestingness feature and a fourth interestingness feature after performing feature processing based on the second interestingness feature and the first noisy feature, where a relevance between the third interestingness feature and another feature is smaller than a second threshold, and a relevance between the fourth interestingness feature and another feature is larger than the second threshold;
carrying out noise adding processing on the third interestingness characteristic to obtain a second noise adding characteristic;
and obtaining an interest point classification result corresponding to the target object through the target classification model based on the fourth interestingness characteristic and the second noise adding characteristic.
Optionally, on the basis of the embodiment corresponding to fig. 10, in another embodiment of the object recognition apparatus 900 provided in the embodiment of the present application, the object recognition apparatus 900 further includes a training module 904;
the obtaining module 901 is further configured to obtain an object sample set for a target service, where the object sample set includes N object samples, each object sample corresponds to one interestingness label, and each object sample includes at least one of object basic attribute information, device basic attribute information, and network connection attribute information;
the processing module 902 is further configured to perform characterization processing on the object sample set to obtain an object sample feature set, where the object sample feature set includes N object sample features, and the object sample features and the object samples have a corresponding relationship;
a determining module 903, configured to determine, based on the object sample feature set, a first interestingness feature set and a second interestingness feature set, where the first interestingness feature set includes P sets of object sample features whose feature point scores are smaller than a first threshold, the second interestingness feature set includes Q sets of object sample features whose feature point scores are greater than the first threshold, the feature point scores indicate degrees of importance of the features, and P and Q are integers greater than or equal to 1;
the processing module 903 is further configured to perform noise adding processing on the first interestingness feature set to obtain a first noise adding feature set;
the obtaining module 901 is further configured to obtain, based on the second interestingness feature set and the first noisy feature set, interest point classification results corresponding to the N object samples through a classification model to be trained;
the training module 904 is configured to train the classification model to be trained according to the interest point classification results corresponding to the N object samples and the interest degree labels corresponding to the N object samples.
Optionally, on the basis of the embodiment corresponding to fig. 10, in another embodiment of the object identifying apparatus 900 provided in the embodiment of the present application, the obtaining module 901 is further configured to obtain a service sample set for a target service;
the processing module 902 is further configured to perform characterization processing on the service sample set to obtain a service sample feature set;
the determining module 903 is specifically configured to determine a first interestingness feature set and a second interestingness feature set based on the object sample feature set and the business sample feature set.
Optionally, on the basis of the embodiment corresponding to fig. 10, in another embodiment of the object identification apparatus 900 provided in the embodiment of the present application, the obtaining module 901 is specifically configured to determine a third interestingness feature set and a fourth interestingness feature set based on the second interestingness feature set and the first noisy feature set, where a relevance between object sample features in the third interestingness feature set is smaller than a second threshold, and a relevance between object sample features in the fourth interestingness feature set is greater than the second threshold;
carrying out noise adding processing on the third interestingness feature set to obtain a second noise adding feature set;
and obtaining interest point classification results corresponding to the N object samples through the classification model to be trained based on the fourth interestingness feature set and the second noise adding feature set.
Optionally, on the basis of the embodiment corresponding to fig. 10, in another embodiment of the object identification apparatus 900 provided in the embodiment of the present application, the obtaining module 901 is specifically configured to obtain an initial object sample set for a target service;
determining a preset threshold range based on the target service;
based on a preset threshold range, determining N object samples from an initial object sample set of the target service.
Optionally, on the basis of the embodiment corresponding to fig. 10, in another embodiment of the object identification apparatus 900 provided in this embodiment of the present application, the determining module 903 is specifically configured to aggregate the object sample feature set and the service sample feature set based on a plurality of preset time periods, and obtain a fifth interestingness feature set;
performing feature processing on the fifth interestingness feature set to obtain a sixth interestingness feature, wherein the feature processing comprises at least one of normalization feature processing and discretization feature processing;
and determining the first interestingness feature set and the second interestingness feature set based on the sixth interestingness feature.
Optionally, on the basis of the embodiment corresponding to fig. 10, in another embodiment of the object identification apparatus 900 provided in the embodiment of the present application, the determining module 903 is specifically configured to perform dimension reduction processing on the sixth interestingness feature to obtain a first object behavior feature;
sequencing the sixth interestingness characteristic to obtain a second object behavior characteristic;
performing aggregation processing on the first object behavior characteristics and the second object behavior characteristics to obtain a seventh interestingness characteristic set;
and processing the seventh interestingness feature set based on the service sample, and determining a first interestingness feature set and a second interestingness feature set.
Optionally, on the basis of the embodiment corresponding to fig. 10, in another embodiment of the object identification apparatus 900 provided in the embodiment of the present application, the determining module 903 is specifically configured to determine a preset policy based on a service sample;
screening the seventh interestingness feature set based on a preset strategy to obtain features meeting the preset strategy and features not meeting the preset strategy;
calculating the average value of the features meeting the preset strategy to obtain the feature average value;
carrying out deletion marking processing on the features which do not meet the preset strategy to obtain a feature set subjected to deletion marking;
and splicing the feature average value and the feature set without the mark to determine a first interest degree feature set and a second interest degree feature set.
Optionally, on the basis of the embodiment corresponding to fig. 10, in another embodiment of the object identification apparatus 900 provided in this embodiment of the present application, the determining module 903 is specifically configured to perform a splicing process on the feature average value and the feature set subjected to the missing mark to obtain a feature set subjected to the splicing process;
and determining a first interestingness feature set and a second interestingness feature set from the feature sets after splicing processing based on a preset strategy.
Optionally, on the basis of the embodiment corresponding to fig. 10, in another embodiment of the object identification apparatus 900 provided in the embodiment of the present application, the obtaining module 901 is further configured to obtain, based on the feature set after splicing processing, interest point classification results corresponding to the N object samples of each classification model to be selected through a plurality of classification models to be selected, where the classification models to be selected are respectively models of different types;
the training module 904 is further configured to train the multiple classification models to be selected respectively based on the interest point classification results corresponding to the N object samples of each classification model to be selected and the interest degree labels corresponding to the N object samples, so as to obtain multiple classification models;
a determining module 903, configured to determine a classification model to be trained from the multiple classification models;
the training module 904 is further configured to update model parameters of the classification model to be trained according to the target loss function based on the interest point classification results corresponding to the N object samples and the interest degree labels corresponding to the N object samples, so as to obtain a target classification model.
An embodiment of the present application further provides another object identification apparatus, where the object identification apparatus may be disposed in a server or a terminal device; in this application, the object identification apparatus disposed in the server is taken as an example. Please refer to fig. 11, which is a schematic diagram of an embodiment of the server in the embodiment of the present application. As shown in the figure, the server 1000 may vary considerably with configuration or performance and may include one or more central processing units (CPUs) 1022 (e.g., one or more processors), a memory 1032, and one or more storage media 1030 (e.g., one or more mass storage devices) storing an application 1042 or data 1044. The memory 1032 and the storage medium 1030 may be transient or persistent storage. The program stored on the storage medium 1030 may include one or more modules (not shown), and each module may include a series of instruction operations for the server. Still further, the central processing unit 1022 may be disposed in communication with the storage medium 1030 and configured to execute the series of instruction operations in the storage medium 1030 on the server 1000.
The server 1000 may also include one or more power supplies 1026, one or more wired or wireless network interfaces 1050, one or more input/output interfaces 1058, and/or one or more operating systems 1041, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, and the like.
The steps performed by the server in the above embodiment may be based on the server structure shown in fig. 11.
The server includes a CPU 1022 configured to execute the embodiment shown in fig. 3 and the embodiments corresponding to fig. 3.
The present application further provides a terminal device, configured to execute the steps executed by the object recognition apparatus in the embodiment shown in fig. 3 and the embodiments corresponding to fig. 3. As shown in fig. 12, for convenience of explanation, only the parts related to the embodiments of the present application are shown; for specific technical details that are not disclosed, please refer to the method part of the embodiments of the present application. The terminal device being a mobile phone is taken as an example for explanation:
fig. 12 is a block diagram illustrating a partial structure of a mobile phone related to a terminal provided in an embodiment of the present application. Referring to fig. 12, the cellular phone includes: radio Frequency (RF) circuitry 1110, memory 1120, input unit 1130, display unit 1140, sensors 1150, audio circuitry 1160, wireless fidelity (WiFi) module 1170, processor 1180, and power supply 1190. Those skilled in the art will appreciate that the handset configuration shown in fig. 12 is not intended to be limiting and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
The following describes each component of the mobile phone in detail with reference to fig. 12:
The RF circuit 1110 may be used for receiving and transmitting signals during a message transmission or a call; in particular, it receives downlink information from a base station and delivers it to the processor 1180 for processing, and transmits uplink data to the base station. In general, the RF circuit 1110 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier (LNA), a duplexer, and the like. In addition, the RF circuit 1110 may also communicate with networks and other devices via wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), e-mail, Short Messaging Service (SMS), and the like.
The memory 1120 may be used to store software programs and modules, and the processor 1180 may execute various functional applications and data processing of the mobile phone by operating the software programs and modules stored in the memory 1120. The memory 1120 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 1120 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The input unit 1130 may be used to receive input numeric or character information and generate key signal inputs related to object settings and function control of the cellular phone. Specifically, the input unit 1130 may include a touch panel 1131 and other input devices 1132. Touch panel 1131, also referred to as a touch screen, can collect touch operations of an object on or near the touch panel 1131 (e.g., operations of the object on or near touch panel 1131 using any suitable object or accessory such as a finger, a stylus, etc.) and drive the corresponding connection device according to a preset program. Alternatively, the touch panel 1131 may include two parts, namely, a touch detection device and a touch controller. The touch detection device detects the touch direction of an object, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 1180, and can receive and execute commands sent by the processor 1180. In addition, the touch panel 1131 can be implemented by using various types, such as resistive, capacitive, infrared, and surface acoustic wave. The input unit 1130 may include other input devices 1132 in addition to the touch panel 1131. In particular, other input devices 1132 may include, but are not limited to, one or more of a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like.
The display unit 1140 may be used to display information input by or provided to the object and various menus of the cellular phone. The Display unit 1140 may include a Display panel 1141, and optionally, the Display panel 1141 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like. Further, the touch panel 1131 can cover the display panel 1141, and when the touch panel 1131 detects a touch operation on or near the touch panel, the touch panel is transmitted to the processor 1180 to determine the type of the touch event, and then the processor 1180 provides a corresponding visual output on the display panel 1141 according to the type of the touch event. Although in fig. 12, the touch panel 1131 and the display panel 1141 are two independent components to implement the input and output functions of the mobile phone, in some embodiments, the touch panel 1131 and the display panel 1141 may be integrated to implement the input and output functions of the mobile phone.
The handset may also include at least one sensor 1150, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor and a proximity sensor, wherein the ambient light sensor may adjust the brightness of the display panel 1141 according to the brightness of ambient light, and the proximity sensor may turn off the display panel 1141 and/or the backlight when the mobile phone moves to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally, three axes), can detect the magnitude and direction of gravity when stationary, and can be used for applications of recognizing the posture of a mobile phone (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration recognition related functions (such as pedometer and tapping), and the like; as for other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which can be configured on the mobile phone, further description is omitted here.
The audio circuit 1160, the speaker 1161, and the microphone 1162 may provide an audio interface between the object and the handset. The audio circuit 1160 may transmit the electrical signal converted from received audio data to the speaker 1161, and the speaker 1161 converts the electrical signal into a sound signal for output; on the other hand, the microphone 1162 converts collected sound signals into electrical signals, which are received by the audio circuit 1160 and converted into audio data; the audio data are then output to the processor 1180 for processing, and afterwards transmitted to, for example, another mobile phone via the RF circuit 1110, or output to the memory 1120 for further processing.
WiFi belongs to short-distance wireless transmission technology. Through the WiFi module 1170, the mobile phone can help the object to receive and send e-mails, browse web pages, access streaming media and the like, providing the object with wireless broadband Internet access. Although fig. 12 shows the WiFi module 1170, it is to be understood that it is not an essential component of the handset.
The processor 1180 is a control center of the mobile phone, and is connected to various parts of the whole mobile phone through various interfaces and lines, and executes various functions of the mobile phone and processes data by operating or executing software programs and/or modules stored in the memory 1120 and calling data stored in the memory 1120, thereby performing overall monitoring of the mobile phone. Optionally, processor 1180 may include one or more processing units; preferably, the processor 1180 may integrate an application processor, which mainly handles operating systems, object interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated within processor 1180.
The phone also includes a power supply 1190 (e.g., a battery) for powering the various components; preferably, the power supply may be logically connected to the processor 1180 via a power management system, so that charging, discharging, and power consumption are managed through the power management system.
Although not shown, the mobile phone may further include a camera, a bluetooth module, and the like, which are not described herein.
In the embodiment of the present application, the terminal includes a processor 1180 configured to execute the embodiment shown in fig. 3 and the corresponding embodiments in fig. 3.
An embodiment of the present application further provides a computer-readable storage medium in which a computer program is stored; when the computer program runs on a computer, the computer is caused to execute the steps executed by the object recognition apparatus in the methods described in the foregoing embodiment shown in fig. 3 and the embodiments corresponding to fig. 3.
Also provided in an embodiment of the present application is a computer program product including a program, which, when run on a computer, causes the computer to perform the steps performed by the object recognition apparatus in the method described in the foregoing embodiment shown in fig. 3.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be substantially implemented or contributed to by the prior art, or all or part of the technical solution may be embodied in a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a read-only memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.