Electronic card selection method and device, terminal and storage medium
1. A method for selecting an electronic card, comprising:
acquiring current scene information, and determining a scene type according to the scene information;
and selecting a candidate electronic card matched with the scene type as a target electronic card.
2. The selection method according to claim 1, wherein the acquiring current scene information and determining a scene type according to the scene information comprises:
receiving a scene image fed back by smart glasses;
identifying shooting subjects contained in the scene image;
and determining the scene type according to all the shooting subjects.
3. The selection method according to claim 1, wherein the acquiring current scene information and determining a scene type according to the scene information comprises:
collecting environmental sound in the current scene;
acquiring a frequency domain spectrum of the environmental sound, and determining the sound-producing subjects contained in the current scene according to the frequency values contained in the frequency domain spectrum;
and determining the scene type according to all the sound-producing subjects.
4. The selection method according to claim 1, wherein the acquiring current scene information and determining a scene type according to the scene information comprises:
acquiring current position information and extracting scene keywords contained in the position information;
respectively calculating the confidence probability of each candidate scene according to the confidence degrees of the candidate scenes associated with all the scene keywords;
and selecting the candidate scene with the highest confidence probability as the scene type corresponding to the position information.
5. The method according to any one of claims 1 to 4, wherein the selecting a candidate electronic card matching the scene type as a target electronic card comprises:
respectively calculating the matching degree between each candidate electronic card and the scene type;
and selecting the candidate electronic card with the highest matching degree as the target electronic card.
6. The selection method according to claim 5, wherein after the selecting a candidate electronic card matching the scene type as a target electronic card, the method further comprises:
executing a card swiping authentication operation through the target electronic card and a card swiping device;
if the card swiping authentication fails, selecting the candidate electronic card with the highest matching degree from all the candidate electronic cards except the target electronic card as a new target electronic card, and returning to the step of executing the card swiping authentication operation through the target electronic card and the card swiping device until the card swiping authentication succeeds.
7. The method according to any one of claims 1 to 4, wherein the selecting a candidate electronic card matching the scene type as a target electronic card comprises:
acquiring a standard scene of each candidate electronic card;
and matching the scene type with each standard scene, and determining the target electronic card according to a matching result.
8. An apparatus for selecting an electronic card, comprising:
a scene type determining unit, configured to acquire current scene information and determine a scene type according to the scene information;
and an electronic card selecting unit, configured to select a candidate electronic card matching the scene type as a target electronic card.
9. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the method according to any of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when executed by a processor, implements the method according to any one of claims 1 to 7.
Background
In daily life, a user can perform operations such as payment and authentication through a physical card, but as the types of services increase, the number of corresponding physical cards increases with them. With the development of electronic technology, a physical card can be converted into an electronic card and bound to a smart terminal, which then performs the related payment and authentication operations. However, in the existing electronic card technology, when performing operations such as authentication and payment, the user needs to manually select the electronic card associated with the current operation, which increases the difficulty of operation and lowers operation efficiency.
Disclosure of Invention
The embodiments of the present application provide an electronic card selection method, an electronic card selection device, a terminal, and a storage medium, which can solve the problem in the existing electronic card technology that the electronic card associated with the current operation needs to be selected manually, increasing the difficulty of operation and lowering operation efficiency.
In a first aspect, an embodiment of the present application provides a method for selecting an electronic card, including:
acquiring current scene information, and determining a scene type according to the scene information;
and selecting a candidate electronic card matched with the scene type as a target electronic card.
In a possible implementation manner of the first aspect, the acquiring current scene information and determining a scene type according to the scene information includes:
receiving a scene image fed back by smart glasses;
identifying shooting subjects contained in the scene image;
and determining the scene type according to all the shooting subjects.
In a possible implementation manner of the first aspect, the acquiring current scene information and determining a scene type according to the scene information includes:
collecting environmental sound in the current scene;
acquiring a frequency domain spectrum of the environmental sound, and determining the sound-producing subjects contained in the current scene according to the frequency values contained in the frequency domain spectrum;
and determining the scene type according to all the sound-producing subjects.
In a possible implementation manner of the first aspect, the acquiring current scene information and determining a scene type according to the scene information includes:
acquiring current position information and extracting scene keywords contained in the position information;
respectively calculating the confidence probability of each candidate scene according to the confidence degrees of the candidate scenes associated with all the scene keywords;
and selecting the candidate scene with the highest confidence probability as the scene type corresponding to the position information.
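For illustration only, the following sketch shows one possible form of this keyword-based confidence calculation; the keyword table, its confidence values, and the simple summation rule are hypothetical placeholders rather than part of the claimed method.

```kotlin
// Hypothetical table: scene keyword -> (candidate scene -> confidence degree).
val keywordConfidence: Map<String, Map<String, Double>> = mapOf(
    "bank" to mapOf("bank" to 0.9, "office" to 0.2),
    "branch" to mapOf("bank" to 0.7, "office" to 0.4),
    "station" to mapOf("traffic" to 0.9)
)

// Combine the confidence degrees of all extracted keywords into one
// confidence probability per candidate scene (here: a plain sum), then
// pick the candidate scene with the highest value.
fun inferSceneType(sceneKeywords: List<String>): String? =
    sceneKeywords
        .flatMap { keywordConfidence[it]?.entries ?: emptySet() }
        .groupBy({ it.key }, { it.value })
        .mapValues { (_, values) -> values.sum() }
        .maxByOrNull { it.value }
        ?.key

fun main() {
    println(inferSceneType(listOf("bank", "branch")))  // -> bank
}
```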
In a possible implementation manner of the first aspect, the selecting a candidate electronic card matching the scene type as a target electronic card includes:
respectively calculating the matching degree between each candidate electronic card and the scene type;
and selecting the candidate electronic card with the highest matching degree as the target electronic card.
In a possible implementation manner of the first aspect, after the selecting a candidate electronic card matching the scene type as a target electronic card, the method further includes:
executing a card swiping authentication operation through the target electronic card and a card swiping device;
if the card swiping authentication fails, selecting the candidate electronic card with the highest matching degree from all the candidate electronic cards except the target electronic card as a new target electronic card, and returning to the step of executing the card swiping authentication operation through the target electronic card and the card swiping device until the card swiping authentication succeeds.
In a possible implementation manner of the first aspect, the selecting a candidate electronic card matching the scene type as a target electronic card includes:
acquiring a standard scene of each candidate electronic card;
and matching the scene type with each standard scene, and determining the target electronic card according to a matching result.
In a second aspect, an embodiment of the present application provides an electronic card selecting device, including:
a scene type determining unit, configured to acquire current scene information and determine a scene type according to the scene information;
and an electronic card selecting unit, configured to select a candidate electronic card matching the scene type as a target electronic card.
In a third aspect, an embodiment of the present application provides a terminal device including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the electronic card selection method according to any one of the first aspect when executing the computer program.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium, in which a computer program is stored, where the computer program, when executed by a processor, implements the electronic card selection method according to any one of the first aspect.
In a fifth aspect, an embodiment of the present application provides a computer program product, which when running on a terminal device, causes the terminal device to execute the method for selecting an electronic card according to any one of the above first aspects.
It is understood that, for the beneficial effects of the second aspect to the fifth aspect, reference may be made to the related description of the first aspect, which is not repeated here.
According to the electronic card selection method and device, when an electronic card needs to be invoked for operations such as authentication or payment, the terminal device acquires the current scene information, determines the scene type according to the scene objects contained in the scene information, and selects the electronic card associated with the scene type from all candidate electronic cards as the target electronic card. This achieves the purpose of automatically selecting the electronic card and improves the operation efficiency and response speed of the electronic card.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and those skilled in the art can obtain other drawings based on them without inventive effort.
Fig. 1 is a block diagram of a partial structure of a mobile phone provided in an embodiment of the present application;
fig. 2 is a schematic diagram of a software structure of a mobile phone according to an embodiment of the present application;
fig. 3 is a flowchart illustrating a method for selecting an electronic card according to a first embodiment of the present application;
fig. 4 is a schematic diagram illustrating scene type identification based on a scene image according to an embodiment of the present application;
fig. 5 is a schematic diagram of electronic card selection according to an embodiment of the present application;
fig. 6 is a flowchart illustrating an implementation of step S301 of the electronic card selection method according to a second embodiment of the present application;
fig. 7 is a schematic view of the shooting range of a terminal device during a card swiping process according to an embodiment of the present application;
fig. 8 is a schematic view of the shooting range of smart glasses during a card swiping process according to another embodiment of the present application;
fig. 9 is a flowchart illustrating an implementation of step S301 of the electronic card selection method according to a third embodiment of the present application;
fig. 10 is a flowchart illustrating an implementation of step S301 of the electronic card selection method according to a fourth embodiment of the present application;
fig. 11 is a flowchart illustrating an implementation of step S302 of the electronic card selection method according to a fifth embodiment of the present application;
fig. 12 is a flowchart illustrating an implementation of step S302 of the electronic card selection method according to a sixth embodiment of the present application;
fig. 13 is a schematic diagram illustrating an electronic card selection system according to an embodiment of the present application;
fig. 14 is a block diagram illustrating an electronic card selecting apparatus according to an embodiment of the present application;
fig. 15 is a schematic diagram of a terminal device according to another embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when," "upon," "in response to determining," or "in response to detecting." Similarly, the phrase "if it is determined" or "if a [described condition or event] is detected" may be interpreted contextually to mean "upon determining," "in response to determining," "upon detecting [the described condition or event]," or "in response to detecting [the described condition or event]."
Furthermore, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used for distinguishing between descriptions and not necessarily for describing or implying relative importance.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
The embodiments of the present application provide an electronic card selection method, device, equipment, and storage medium. When an electronic card needs to be invoked for operations such as authentication or payment, the terminal device acquires the current scene information, determines the scene type according to the scene objects contained in the scene information, and selects the electronic card associated with the scene type from all candidate electronic cards as the target electronic card, thereby achieving the purpose of automatically selecting the electronic card and improving the operation efficiency and response speed of the electronic card.
The technical solution of the present application will be described in detail below with specific examples. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments.
The electronic card selection method provided by the embodiment of the application can be applied to mobile phones, tablet computers, wearable devices, vehicle-mounted devices, Augmented Reality (AR)/Virtual Reality (VR) devices, notebook computers, ultra-mobile personal computers (UMPC), netbooks, Personal Digital Assistants (PDA) and other terminal devices, and can also be applied to databases, servers and service response systems based on terminal artificial intelligence.
For example, the terminal device may be a station (ST) in a WLAN, a cellular phone, a cordless phone, a Session Initiation Protocol (SIP) phone, a Wireless Local Loop (WLL) station, a Personal Digital Assistant (PDA) device, a handheld device with wireless communication capability, a computing device or other processing device connected to a wireless modem, a computer, a laptop, a handheld communication device, a handheld computing device, and/or another device for communicating on a wireless system, as well as a mobile terminal in a next-generation communication system, such as a mobile terminal in a 5G network or a mobile terminal in a future evolved Public Land Mobile Network (PLMN), and so on.
By way of example and not limitation, when the terminal device is a wearable device, the wearable device may be a general term for devices that apply wearable technology to the intelligent design of daily wear, such as gloves or watches configured with a near field communication module. A wearable device is a portable device that is worn directly on the body or integrated into the user's clothing or accessories, and performs operations such as payment and authentication through a pre-bound electronic card. A wearable device is not merely a piece of hardware; it realizes powerful functions through software support, data interaction, and cloud interaction. Broadly speaking, wearable smart devices include full-featured, large-sized devices that can realize complete or partial functions without relying on a smartphone, such as smart watches or smart glasses, as well as devices that focus on only a certain type of application function and need to be used together with other devices such as a smartphone, such as various smart watches and smart bracelets with display screens.
In this embodiment, the terminal device may be a mobile phone 100 having the hardware structure shown in fig. 1. As shown in fig. 1, the mobile phone 100 may specifically include: a Radio Frequency (RF) circuit 110, a memory 120, an input unit 130, a display unit 140, a sensor 150, an audio circuit 160, a short-range wireless communication module 170, a processor 180, and a power supply 190. Those skilled in the art will appreciate that the configuration of the mobile phone 100 shown in fig. 1 does not constitute a limitation on the terminal device, which may include more or fewer components than shown, combine some components, or arrange the components differently.
The following describes each component of the mobile phone in detail with reference to fig. 1:
the RF circuit 110 may be used for receiving and transmitting signals during information transmission and reception or during a call, and in particular, receives downlink information of a base station and then processes the received downlink information to the processor 180; in addition, the data for designing uplink is transmitted to the base station. Typically, the RF circuitry includes, but is not limited to, an antenna, at least one Amplifier, a transceiver, a coupler, a Low Noise Amplifier (LNA), a duplexer, and the like. In addition, the RF circuitry 110 may also communicate with networks and other devices via wireless communications. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE)), e-mail, Short Messaging Service (SMS), and the like.
The memory 120 may be used to store software programs and modules, and the processor 180 executes various functional applications and data processing of the mobile phone by running the software programs and modules stored in the memory 120. The memory 120 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function or an image playing function), and the like, and the data storage area may store data created according to the use of the mobile phone (such as audio data or a phonebook), and the like. Further, the memory 120 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. Specifically, the memory 120 may store the card information of the electronic cards and the correspondence between each electronic card and its associated scene type, and the mobile phone may determine the target electronic card associated with the current scene through the memory 120.
The input unit 130 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the cellular phone 100. Specifically, the input unit 130 may include a touch panel 131 and other input devices 132. The touch panel 131, also referred to as a touch screen, may collect touch operations of a user on or near the touch panel 131 (e.g., operations of the user on or near the touch panel 131 using any suitable object or accessory such as a finger or a stylus pen), and drive the corresponding connection device according to a preset program. Alternatively, the touch panel 131 may include two parts, i.e., a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 180, and can receive and execute commands sent by the processor 180. In addition, the touch panel 131 may be implemented by various types such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. The input unit 130 may include other input devices 132 in addition to the touch panel 131. In particular, other input devices 132 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like.
The display unit 140 may be used to display information input by a user or information provided to the user and various menus of the mobile phone. The Display unit 140 may include a Display panel 141, and optionally, the Display panel 141 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like. Further, the touch panel 131 can cover the display panel 141, and when the touch panel 131 detects a touch operation on or near the touch panel 131, the touch operation is transmitted to the processor 180 to determine the type of the touch event, and then the processor 180 provides a corresponding visual output on the display panel 141 according to the type of the touch event. Although the touch panel 131 and the display panel 141 are shown as two separate components in fig. 1 to implement the input and output functions of the mobile phone, in some embodiments, the touch panel 131 and the display panel 141 may be integrated to implement the input and output functions of the mobile phone.
The handset 100 may also include at least one sensor 150, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor that adjusts the brightness of the display panel 141 according to the brightness of ambient light, and a proximity sensor that turns off the display panel 141 and/or the backlight when the mobile phone is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally, three axes), can detect the magnitude and direction of gravity when stationary, and can be used for applications of recognizing the posture of a mobile phone (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration recognition related functions (such as pedometer and tapping), and the like; as for other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which can be configured on the mobile phone, further description is omitted here. Optionally, the mobile phone may obtain, through a learning algorithm, measured values of the sensors when the user performs a card swiping operation, so as to determine in advance whether the user needs to perform the card swiping operation before the mobile phone approaches the card swiping device, and acquire current scene information to determine a scene type, thereby further improving the selection efficiency of the electronic card.
The audio circuit 160, speaker 161, and microphone 162 may provide an audio interface between the user and the mobile phone. On one hand, the audio circuit 160 may transmit the electrical signal converted from received audio data to the speaker 161, which converts it into a sound signal for output; on the other hand, the microphone 162 converts a collected sound signal into an electrical signal, which is received by the audio circuit 160 and converted into audio data. The audio data is then processed by the processor 180 and either transmitted via the RF circuit 110 to, for example, another mobile phone, or output to the memory 120 for further processing.
Communication technologies such as WiFi, Bluetooth, and Near Field Communication (NFC) belong to short-range wireless transmission technologies. Through the short-range wireless communication module 170, the mobile phone can help the user receive and send e-mails, browse webpages, access streaming media, and the like, providing wireless broadband Internet access. The short-range wireless communication module 170 may include a WiFi chip, a Bluetooth chip, and an NFC chip. The WiFi chip may implement a WiFi Direct connection between the mobile phone 100 and other terminal devices, and may also enable the mobile phone 100 to operate in AP mode (Access Point mode, providing wireless access service and allowing other wireless devices to connect) or in STA mode (Station mode, connecting to an AP without accepting connections from other wireless devices), so as to establish peer-to-peer communication between the mobile phone 100 and other WiFi devices. The mobile phone can establish a short-range communication link with the card swiping device through the NFC chip and send the card information of a pre-written electronic card to the card swiping device over this link to execute the subsequent card swiping operation; the card swiping device feeds the card swiping result back to the mobile phone, which outputs it through its display module.
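For illustration, the sketch below shows one way an Android terminal device could answer a card swiping device over the NFC link using host card emulation; the service name, card payload, and the always-success reply are hypothetical simplifications, and a real electronic card would implement the full APDU protocol.

```kotlin
import android.nfc.cardemulation.HostApduService
import android.os.Bundle

// Minimal host-card-emulation service: the card swiping device sends APDU
// commands over the NFC link, and this service answers with the card data.
class ElectronicCardService : HostApduService() {

    // Hypothetical payload of the currently selected electronic card.
    private val selectedCardData = byteArrayOf(0x01, 0x02, 0x03)

    // 0x9000 is the ISO 7816 status word for success.
    private val statusOk = byteArrayOf(0x90.toByte(), 0x00)

    override fun processCommandApdu(commandApdu: ByteArray, extras: Bundle?): ByteArray {
        // A real implementation would parse the incoming APDU; here we always
        // reply with the selected card's data followed by the status word.
        return selectedCardData + statusOk
    }

    override fun onDeactivated(reason: Int) {
        // The reader moved out of range or selected another applet.
    }
}
```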
The processor 180 is a control center of the mobile phone, connects various parts of the entire mobile phone by using various interfaces and lines, and performs various functions of the mobile phone and processes data by operating or executing software programs and/or modules stored in the memory 120 and calling data stored in the memory 120, thereby integrally monitoring the mobile phone. Alternatively, processor 180 may include one or more processing units; preferably, the processor 180 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 180.
The handset 100 also includes a power supply 190 (e.g., a battery) for powering the various components, which may preferably be logically connected to the processor 180 via a power management system, such that the power management system may be used to manage charging, discharging, and power consumption.
The handset 100 may also include a camera. Optionally, the position of the camera on the mobile phone may be front-located or rear-located, which is not limited in this embodiment of the present application. The mobile phone can acquire a scene image of a current scene through the camera, and determine scene information and a scene type by analyzing the scene image.
The software system of the mobile phone 100 may adopt a layered architecture, an event-driven architecture, a micro-core architecture, a micro-service architecture, or a cloud architecture. The embodiments of the present application take an Android system with a layered architecture as an example to illustrate the software structure of the mobile phone 100.
Fig. 2 is a block diagram of the software configuration of the mobile phone 100 according to an embodiment of the present application. The Android system is divided into four layers, namely an application layer, an application framework (FWK) layer, a system layer, and a hardware abstraction layer, which communicate with each other through software interfaces.
The layered architecture divides the software into several layers, each layer having a clear role and division of labor. The layers communicate with each other through a software interface. In some embodiments, the Android system is divided into four layers, an application layer, an application framework layer, an Android runtime (Android runtime) and system library, and a kernel layer from top to bottom.
The application layer may include a series of application packages.
As shown in fig. 2, the application package may include applications such as camera, gallery, calendar, phone call, map, navigation, WLAN, bluetooth, music, video, short message, etc.
The application framework layer provides an Application Programming Interface (API) and a programming framework for the application program of the application layer. The application framework layer includes a number of predefined functions.
As shown in FIG. 2, the application framework layers may include a window manager, content provider, view system, phone manager, resource manager, notification manager, and the like.
The window manager is used for managing window programs. The window manager can obtain the size of the display screen, judge whether there is a status bar, lock the screen, capture the screen, and the like.
The content provider is used to store and retrieve data and make it accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phone books, etc.
The view system includes visual controls such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, the display interface including the short message notification icon may include a view for displaying text and a view for displaying pictures.
The phone manager is used to provide the communication functions of the mobile phone 100, for example, management of call status (including connected, hung up, etc.).
The resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, and the like.
The notification manager enables the application to display notification information in the status bar, can be used to convey notification-type messages, can disappear automatically after a short dwell, and does not require user interaction. Such as a notification manager used to inform download completion, message alerts, etc. The notification manager may also be a notification that appears in the form of a chart or scroll bar text at the top status bar of the system, such as a notification of a background running application, or a notification that appears on the screen in the form of a dialog window. For example, prompting text information in the status bar, sounding a prompt tone, vibrating the electronic device, flashing an indicator light, etc.
The Android Runtime comprises a core library and a virtual machine. The Android runtime is responsible for scheduling and managing an Android system.
The core library comprises two parts: one part is the functions that the Java language needs to call, and the other part is the core library of Android.
The application layer and the application framework layer run in the virtual machine. The virtual machine executes the Java files of the application layer and the application framework layer as binary files. The virtual machine is used to perform functions such as object life cycle management, stack management, thread management, security and exception management, and garbage collection.
The system library may include a plurality of functional modules. For example: surface managers (surface managers), Media Libraries (Media Libraries), three-dimensional graphics processing Libraries (e.g., OpenGL ES), 2D graphics engines (e.g., SGL), and the like.
The surface manager is used to manage the display subsystem and provide fusion of 2D and 3D layers for multiple applications.
The media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files, and the like. The media library may support a variety of audio and video encoding formats, such as MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, and the like.
The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is a layer between hardware and software. The kernel layer contains at least a display driver, a camera driver, an audio driver, and a sensor driver. In some embodiments, the kernel layer further comprises a PCIE driver.
In the embodiment of the present application, the main execution body of the process is a device configured with a near field communication module. By way of example and not limitation, the device configured with the nfc module may specifically be a terminal device, and the terminal device may be a mobile terminal such as a smartphone, a tablet computer, and a notebook computer used by a user. Fig. 3 shows a flowchart of an implementation of a method for selecting an electronic card according to a first embodiment of the present application, which is detailed as follows:
in S301, current scene information is acquired, and a scene type is determined according to the scene information.
In this embodiment, the terminal device may acquire current scene information through an information acquisition module such as an internal sensor, or may receive scene information acquired by other information acquisition devices by establishing a data link with an external information acquisition device.
In a possible implementation manner, a camera module is arranged in the terminal device. The camera module may be a front camera module and/or a rear camera module; a scene image of the current scene is collected through the camera module, identified as scene information, and analyzed to determine the scene type. Alternatively, a microphone module is arranged in the terminal device; the microphone module collects scene audio of the current scene as scene information, and the scene type is determined through audio analysis. Alternatively, a positioning module is arranged in the terminal device; positioning information is acquired through the positioning module and used as scene information, and the associated scene type is determined according to the positioning information.
In a possible implementation manner, the terminal device may determine the scene type through the scene image as follows. The terminal device may be configured with corresponding standard images for different scene types, match the currently collected scene image against each standard image, and determine the scene type associated with the scene image according to the matching result. Specifically, the process of matching the scene image with the standard images may be as follows. The terminal device performs gray processing on the scene image to convert it into a monochrome image, and generates an image array corresponding to the monochrome image from the pixel value and pixel coordinates of each pixel point. The image array is imported into a preset convolutional neural network, and pooling and dimension-reduction operations are performed on it through preset convolution kernels to generate an image feature vector corresponding to the image array. The terminal device then calculates the vector distance between the image feature vector and the standard feature vector corresponding to each standard image, uses the vector distance as the matching probability value for each standard image, and selects the scene type associated with the standard image with the highest probability value as the scene type of the scene image. The standard feature vector of a standard image can be acquired through a self-learning algorithm, which may be implemented as follows: when each electronic card is initially bound, the terminal device acquires a standard image of the use scene corresponding to the electronic card and generates the standard feature vector based on that image; in subsequent use, each time a card swiping operation is executed with the electronic card, the scene image corresponding to the card swiping operation is imported into the neural network and the generated standard feature vector is adjusted. The configured standard feature vector thus receives a posterior adjustment with every use, improving its accuracy.
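For illustration, a minimal sketch of the vector-distance matching step described above, assuming the image feature vector and the standard feature vectors have already been produced by the convolutional network; all names here are hypothetical.

```kotlin
import kotlin.math.sqrt

// Euclidean distance between the scene image's feature vector and a
// standard feature vector.
fun distance(a: FloatArray, b: FloatArray): Float {
    require(a.size == b.size)
    var sum = 0.0
    for (i in a.indices) {
        val d = (a[i] - b[i]).toDouble()
        sum += d * d
    }
    return sqrt(sum).toFloat()
}

// Turn each distance into a matching probability (closer -> higher) and
// select the scene type of the best-matching standard image.
fun matchSceneType(
    imageVector: FloatArray,            // CNN output for the scene image
    standards: Map<String, FloatArray>  // scene type -> standard feature vector
): String? =
    standards.maxByOrNull { (_, std) -> 1.0f / (1.0f + distance(imageVector, std)) }?.key
```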
In one possible implementation manner, each electronic card has a corresponding cloud server. The cloud server can be used to store the operation records of the electronic card and the scene images associated with those records. The cloud server extracts the historical scene images from each operation record and generates the standard feature vector from all the historical scene images. The cloud server can send the standard feature vector, associated with the electronic card identifier, to each terminal device at a preset update period; the terminal device stores the received electronic card identifier and standard feature vector in its storage unit, and extracts the standard feature vector to perform subsequent matching operations.
Illustratively, fig. 4 shows a schematic diagram of scene type identification based on a scene image according to an embodiment of the present application. Referring to fig. 4, the scenario includes a terminal device 41 and a card swiping device 42, and a camera module 411 and a near field communication module 412 are disposed on the terminal device 41. When the terminal device 41 approaches the card swiping device 42, the near field communication module 412 detects the near field communication signal sent by the card swiping device 42 and establishes a communication connection with the card swiping device 42; at this time, the terminal device may activate the camera module 411, collect a scene image of the current scene through the camera module 411, and determine the scene type according to the scene image.
In a possible implementation manner, the terminal device collects several different types of scene information and determines the current scene type from them jointly. Specifically, the terminal device may collect a scene image and scene audio of the current scene, identify a plurality of candidate object types from the scene image, screen out target object types from the candidate object types according to the scene audio, and determine the scene type from the target object types. Screening out invalid candidate object types through the scene audio calibrates the scene type recognition process and thereby improves recognition efficiency. For example, when the terminal device obtains a scene image through the camera module, some scene objects may not be recognizable from the image because of factors such as an excessive shooting distance or occlusion by obstacles, which reduces the accuracy of scene type recognition. To address this, when collecting the scene image, the terminal device can also collect the environmental sound of the current scene through the microphone module, determine the sound-producing subjects from the environmental sound, determine the shooting subjects through image recognition on the scene image, and determine the scene type from both the sound-producing subjects and the shooting subjects.
Specifically, in a possible implementation manner, the scene type may be determined from the sound-producing subjects and the shooting subjects as follows: the terminal device determines a first confidence of each candidate scene type from all the sound-producing subjects and a second confidence of each candidate scene type from all the shooting subjects, weights the first confidence by the voice weight and the second confidence by the image weight, calculates the matching degree of each candidate scene type from the weighted first confidence and the weighted second confidence, and selects the candidate scene type with the highest matching degree as the scene type of the current scene.
For example, according to the scene types associated with the electronic cards stored in the terminal device, the electronic cards may be divided into three different scene types, namely a bank type, a traffic type, and an access control type. When the terminal device detects that an electronic card needs to be invoked, it can obtain a scene image of the current scene through the camera module and determine, through image recognition, that the scene image contains three shooting subjects: a teller machine, a bank sign, and a shielded door. The second confidences (from the shooting subjects) of the three candidate scene types are therefore: (bank type, 80%), (access control type, 50%), (traffic type, 20%). By collecting the environmental sound, it is determined that the sound-producing subjects contained in the scene include cash-handling sounds and mechanical operation sounds, so the first confidences (from the sound-producing subjects) of the three candidate scene types are: (bank type, 60%), (access control type, 50%), (traffic type, 60%). With the preset image weight of 1 and voice weight of 0.8, the matching degrees of the three candidate scene types are (bank type, 80% × 1 + 60% × 0.8 = 1.28), (access control type, 50% × 1 + 50% × 0.8 = 0.9), and (traffic type, 20% × 1 + 60% × 0.8 = 0.68). The candidate scene with the highest matching degree is therefore the bank type, which is taken as the current scene type.
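This weighted fusion reduces to a per-scene weighted sum. The sketch below reproduces the arithmetic of the example above; the scene labels and weight values follow the text, and everything else is illustrative.

```kotlin
fun main() {
    val imageWeight = 1.0
    val voiceWeight = 0.8

    // First confidences come from the sound-producing subjects, second
    // confidences from the shooting subjects, as in the example above.
    val firstConfidence = mapOf("bank" to 0.60, "access control" to 0.50, "traffic" to 0.60)
    val secondConfidence = mapOf("bank" to 0.80, "access control" to 0.50, "traffic" to 0.20)

    // matching degree = imageWeight * second confidence + voiceWeight * first confidence
    val matchingDegree = secondConfidence.mapValues { (scene, imageConf) ->
        imageWeight * imageConf + voiceWeight * (firstConfidence[scene] ?: 0.0)
    }
    // bank: 1 * 0.8 + 0.8 * 0.6 = 1.28; access control: 0.9; traffic: 0.68
    println(matchingDegree.maxByOrNull { it.value }?.key)  // -> bank
}
```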
It should be noted that the above description takes the combination of two types of scene information, voice and image, as an example of determining the scene type. In actual use, more than two types of scene information may be used, and the scene information is not limited to these two types; details are not repeated here.
In a possible implementation manner, a user may trigger a selection process of an electronic card by clicking an electronic card activation control or opening an electronic card application, and the terminal device may also trigger the selection process of the electronic card through the near field communication module when detecting that a near field communication signal is present.
In a possible implementation manner, the terminal device may learn, through a built-in learning algorithm, the motion trajectory of the user when a card swiping operation is performed with the terminal device, so that when the current movement trajectory is detected to be consistent with the learned trajectory, the electronic card selection process is activated automatically. This selects the electronic card in advance and improves the subsequent response speed. The specific implementation process is as follows. The terminal device continuously collects the parameter values of the motion sensor, stores the parameter values of all collection moments in a motion parameter queue in order of collection time, and continuously updates the queue on a first-in, first-out basis. If the terminal device detects that the user has performed a card swiping operation, it obtains the card swiping time and all parameter values in the motion parameter queue, and generates the motion trajectory corresponding to the queue at the card swiping time. By importing the motion trajectories of historical card swiping operations into a machine learning model, the terminal device can generate a card swiping recognition model. In use, all parameter values in the motion parameter queue are imported into the card swiping recognition model to judge whether a card swiping action exists; if so, the electronic card selection process is executed; otherwise, the terminal device continues to collect the motion sensor values and update the motion parameter queue. It should be noted that the terminal device may update the card swiping recognition model after each card swiping operation to improve recognition accuracy.
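For illustration, a minimal sketch of the first-in, first-out motion parameter queue described above; the queue capacity and the recognition-model hook are hypothetical stand-ins for the learned card swiping recognition model.

```kotlin
// Fixed-length FIFO of motion-sensor samples; when the trajectory held in
// the queue resembles a learned card swiping action, card selection starts.
class SwipeTrigger(private val capacity: Int = 128) {
    private val motionQueue = ArrayDeque<FloatArray>()

    fun onSensorSample(sample: FloatArray) {
        motionQueue.addLast(sample)                                  // newest at the tail
        if (motionQueue.size > capacity) motionQueue.removeFirst()   // first in, first out
        if (isSwipeAction(motionQueue)) activateCardSelection()
    }

    // Stand-in for the trained card swiping recognition model in the text.
    private fun isSwipeAction(trajectory: Collection<FloatArray>): Boolean = false

    // Stand-in for S301/S302: acquire scene info and pick the target card.
    private fun activateCardSelection() = println("card selection activated")
}
```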
In S302, a candidate electronic card matching the scene type is selected as a target electronic card.
In this embodiment, a user may bind a plurality of electronic cards to the terminal device, each bound electronic card being one of the aforementioned candidate electronic cards. An electronic card can be bound in the following manner: the user inputs the identifier of the physical card into the terminal device and sends the authorization information of the physical card, for example the bound mobile phone number or the user's identity information, to the cloud server of the physical card through the electronic card control of the terminal device; after verifying that the authorization information is legitimate, the cloud server feeds the corresponding authorization code back to the terminal device, and the terminal device associates the authorization code with the electronic card generated in the terminal device for the physical card, thereby creating in the terminal device the electronic card corresponding to the physical card.
In this embodiment, the terminal device may configure an associated scene type for each candidate electronic card. After the scene type corresponding to the scene information is determined, the terminal device can judge whether the current scene type matches the scene type of each candidate electronic card, that is, whether the scene type associated with the electronic card is consistent with the current scene type, take a candidate electronic card with a consistent scene type as the target electronic card, and execute the subsequent card swiping operation.
In a possible implementation manner, if the scene types associated with a plurality of candidate electronic cards are the same, the candidate electronic card with the highest priority may be selected as the target electronic card according to the priority of each candidate electronic card. For example, fig. 5 shows a schematic diagram of electronic card selection according to an embodiment of the present application. The terminal device is bound with four electronic cards: bank card A, bank card B, a bus card, and an access control card. By acquiring the current scene information, the terminal device determines that the current scene type is the bank type, and the scene types associated with bank card A and bank card B are both the bank type, i.e., the scene types of both electronic cards match the current scene type. In this case, the priorities of the two bank cards can be obtained, and if the priority of bank card A is higher than that of bank card B, bank card A can be selected as the target electronic card.
In a possible implementation manner, if the scene types associated with a plurality of candidate electronic cards are the same, the matching degrees of those candidate electronic cards can be calculated from the current card swiping time and place, and the candidate electronic card with the highest matching degree is selected as the target electronic card. Specifically, different electronic cards have corresponding usage habits; for example, a user swipes electronic card A in the morning and electronic card B in the afternoon. The terminal device can calculate each card's matching degree with the current scene from the historical times and places in its historical card swiping records, and select the candidate electronic card with the highest matching degree as the target electronic card.
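For illustration, the priority-based and habit-based tie-breaking strategies of the last two paragraphs can be combined in a single selection step; the CandidateCard fields, including the habit score derived from historical swipe times and places, are hypothetical.

```kotlin
data class CandidateCard(
    val name: String,
    val sceneType: String,   // scene type associated with the card
    val priority: Int,       // user- or system-assigned priority
    val habitScore: Double   // match with historical swipe time/place
)

// Keep the cards whose associated scene type equals the current scene type,
// then prefer higher priority, breaking remaining ties by habit score.
fun pickTargetCard(cards: List<CandidateCard>, currentSceneType: String): CandidateCard? =
    cards.filter { it.sceneType == currentSceneType }
        .maxWithOrNull(compareBy<CandidateCard>({ it.priority }, { it.habitScore }))

fun main() {
    val cards = listOf(
        CandidateCard("bank card A", "bank", priority = 2, habitScore = 0.7),
        CandidateCard("bank card B", "bank", priority = 1, habitScore = 0.9),
        CandidateCard("bus card", "traffic", priority = 1, habitScore = 0.5)
    )
    println(pickTargetCard(cards, "bank")?.name)  // -> bank card A
}
```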
It can be seen from the above that, in the electronic card selection method provided by the embodiments of the present application, when an electronic card needs to be invoked for operations such as authentication or payment, the terminal device acquires the current scene information, determines the scene type according to the scene objects contained in the scene information, and selects the electronic card associated with the scene type from all candidate electronic cards as the target electronic card, thereby achieving the purpose of automatically selecting the electronic card and improving the operation efficiency and response speed of the electronic card.
Fig. 6 shows a flowchart of a specific implementation of step S301 of the electronic card selection method according to a second embodiment of the present application. Referring to fig. 6, relative to the embodiment shown in fig. 3, S301 of the electronic card selection method provided in this embodiment includes S601 to S603, detailed as follows:
further, the acquiring current scene information and determining a scene type according to the scene information includes:
in S601, a scene image fed back by the smart glasses is received.
In this embodiment, the terminal device establishes a communication connection with external smart glasses and collects a scene image of the current scene through a camera module built into the smart glasses. Because the smart glasses are worn near the user's eyes, the line of sight is clear and highly consistent with what the user is actually looking at, compared with collecting scene images through the camera module built into the terminal device; this reduces the chance that the main scene subject is occluded by other objects when photographed, and thereby improves the accuracy of scene type recognition. This matters in some scenes: for example, in the traffic scene type, when a user boards a bus using an electronic card, the user often takes the mobile phone out of a clothes or trouser pocket and directly performs the card swiping operation, and along the moving path from the pocket to the card swiping machine, the camera module built into the terminal device is very unlikely to capture a scene image containing the card swiping machine.
Exemplarily, fig. 7 shows a schematic diagram of the shooting range of a terminal device during a card swiping process according to an embodiment of the present application. Referring to fig. 7, the initial position of the terminal device is in a pocket; when the card needs to be swiped, the terminal device is taken out of the pocket and brought close to the card swiping machine, i.e., the target position is near the card swiping machine. During this movement, the photographed area is the sector shown in fig. 7. Therefore, the captured scene image contains the card swiping device only when the terminal device is already close to the card swiping machine, and even then only a partial image of it, so the recognition accuracy is low.
Exemplarily, fig. 8 shows a schematic diagram of the shooting range of smart glasses during a card swiping process according to another embodiment of the present application. As shown in fig. 8, the smart glasses are worn over the user's eyes, so their shooting range is substantially consistent with the visual range of the human eye, and the card swiping machine can be recorded continuously by the smart glasses as the user moves toward the card swiping device. Compared with the built-in camera module of the terminal device, the environment images collected by the smart glasses therefore give a better recognition result.
In a possible implementation manner, when detecting that a preset scene information collection condition is met, the terminal device may send a collection instruction to the smart glasses; after receiving the collection instruction, the smart glasses perform an image collection operation and feed the collected image back to the terminal device, so that the terminal device obtains the scene image. Specifically, the scene information collection condition may be: the terminal device detects that the current scene contains a near field communication signal; or the terminal device has recorded a number of card swiping places from historical card swiping operations, and detects that the current position has reached a stored card swiping place.
In a possible implementation manner, the smart glasses may acquire a current scene image in a preset acquisition period, and feed back the acquired scene image to the terminal device, and the terminal device may identify a shooting subject in the scene image, determine whether the shooting subject includes a target subject related to a card swiping operation, and if so, execute the operation of S603.
In this embodiment, wireless communication can be established between the terminal device and the smart glasses. Specifically, the smart glasses have a built-in wireless communication module, such as a WiFi module, a Bluetooth module, or a ZigBee module, and the terminal device has a corresponding built-in wireless communication module. The terminal device searches for the wireless network of the smart glasses and joins it, thereby establishing a wireless communication link with the smart glasses.
In S602, a photographic subject included in the scene image is identified.
In this embodiment, the terminal device may analyze the shooting subjects contained in the scene image through an image analysis algorithm. Specifically, the shooting subjects may be determined as follows: contour lines contained in the scene image are identified to divide the scene image into a plurality of subject areas, and the subject type of the shooting subject corresponding to each subject area is determined according to the contour shape and the color characteristics of that area.
In one possible implementation, the terminal device may be configured with a list of subject types, each of which is associated with a corresponding subject model. The terminal device can match each subject area against each subject model and select the subject type of the best-matching subject model as the shooting subject corresponding to that subject area.
In a possible implementation manner, the terminal device may perform a preprocessing operation on the scene image before analyzing it, which can improve the accuracy of shooting subject recognition. Specifically, the preprocessing operation may proceed as follows: the terminal device performs grayscale processing on the scene image, i.e., converts the color image into a monochrome image; adjusts the monochrome image through a filter and according to the actual light intensity of the photographed scene, for example increasing the pixel values of highlight areas and decreasing the pixel values of shadow areas; determines the contour lines contained in the scene image through a contour recognition algorithm; and deepens the contour line areas, which facilitates separating the shooting subjects and determining the contour characteristics of each.
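The following is a minimal Python sketch of such a preprocessing pipeline using OpenCV; histogram equalization stands in for the filter-and-light-intensity adjustment described above, which this application does not pin to a specific operator.

```python
import cv2

def preprocess_scene_image(bgr_image):
    """Grayscale conversion, tone adjustment, and contour deepening."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)     # colour -> monochrome
    gray = cv2.equalizeHist(gray)                          # tone-adjustment stand-in
    edges = cv2.Canny(gray, 50, 150)                       # contour lines
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    deepened = gray.copy()
    cv2.drawContours(deepened, contours, -1, color=0, thickness=2)  # darken outlines
    return deepened, contours
```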
In S603, the scene type is determined from all the subjects.
In this embodiment, the terminal device may calculate, for each candidate type, a matching factor of every recognized shooting subject, and superimpose the matching factors of all the shooting subjects to obtain the matching degree of that candidate type. The candidate scene with the highest matching degree is then selected as the scene type corresponding to the scene image.
In a possible implementation manner, the terminal device may determine a weight value for each shooting subject according to the area that subject occupies in the scene image: the larger the occupied area, the higher the corresponding weight value, and conversely, the smaller the occupied area, the lower the weight value. The matching degree of each candidate type is then determined by weighted superposition of the matching factors between each shooting subject and that candidate type using these weight values.
For example, suppose the shooting subjects recognized in a scene image are an ATM, a security door, a bank sign, and a person, occupying 25%, 30%, 8%, and 15% of the whole scene image respectively. The terminal device may convert these area ratios into the weight values 2, 2, 1, and 1.5. If the matching factors between the four shooting subjects and the bank scene type are 100%, 80%, 100%, and 30% respectively, the matching degree between the scene image and the bank scene type is 2 × 100% + 2 × 80% + 1 × 100% + 1.5 × 30% = 5.05.
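A short Python sketch of the weighted superposition, reproducing the numbers from the example above; the subject names and factor table are illustrative only.

```python
def scene_matching_degree(subject_weights, match_factors):
    """Weighted superposition of per-subject matching factors."""
    return sum(w * match_factors.get(name, 0.0)
               for name, w in subject_weights)

# The numbers from the example above (weights from area ratios, factors vs. "bank"):
subject_weights = [("ATM", 2.0), ("security door", 2.0),
                   ("bank sign", 1.0), ("person", 1.5)]
bank_factors = {"ATM": 1.00, "security door": 0.80,
                "bank sign": 1.00, "person": 0.30}
print(round(scene_matching_degree(subject_weights, bank_factors), 2))  # 5.05
```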
In the embodiment of the application, the scene image is collected through the smart glasses, and the shooting subjects contained in the scene image are analyzed to determine the current scene type, so that automatic identification of the scene type is realized, the identification accuracy of the scene type is improved, and the accuracy of electronic card selection is improved accordingly.
Fig. 9 shows a flowchart of a specific implementation of an electronic card selecting method S301 according to a third embodiment of the present application. Referring to fig. 9, relative to the embodiment described in fig. 3, S301 in the method for selecting an electronic card provided in this embodiment includes S901 to S903, which are detailed as follows:
further, the acquiring current scene information and determining a scene type according to the scene information includes:
in S901, ambient sounds in a current scene are collected.
In this embodiment, the terminal device may collect the environmental sound of the current scene through an internal or external microphone module. Specifically, when detecting that a preset scene information acquisition condition is met, the terminal device may send a scene information acquisition instruction to the microphone module. The process of triggering the scene type identification operation based on the scene information acquisition condition may refer to the related description of the above embodiment, and is not described herein again.
In one possible implementation, the user wears a headset that includes a first microphone module, and a communication link is established between the terminal device and the headset. In this case, the terminal device may control both the first microphone module of the headset and its own built-in second microphone module to collect ambient sound, and determine the ambient sound of the current scene from the two recordings. Specifically, this may be done as follows: the terminal device determines a first signal-to-noise ratio of the first ambient sound acquired by the first microphone module and a second signal-to-noise ratio of the second ambient sound acquired by the second microphone module, compares the two, and selects the recording with the larger signal-to-noise ratio as the ambient sound of the current scene. The larger the signal-to-noise ratio, the smaller the influence of noise on the collected ambient sound, and the higher the accuracy of the subsequent determination of the sounding subjects.
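A minimal Python sketch of the two-channel selection; the power-ratio SNR estimate is an assumption, since this application does not specify how the signal-to-noise ratio is measured.

```python
import numpy as np

def estimate_snr_db(signal, noise_floor):
    """Power-ratio SNR in dB; noise_floor is a sample of background noise alone."""
    p_sig = np.mean(np.asarray(signal, dtype=np.float64) ** 2)
    p_noise = np.mean(np.asarray(noise_floor, dtype=np.float64) ** 2) + 1e-12
    return 10.0 * np.log10(p_sig / p_noise + 1e-12)

def pick_ambient_sound(first_sound, first_noise, second_sound, second_noise):
    """Keep whichever microphone channel has the larger SNR."""
    snr1 = estimate_snr_db(first_sound, first_noise)
    snr2 = estimate_snr_db(second_sound, second_noise)
    return first_sound if snr1 >= snr2 else second_sound
```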
In S902, a frequency domain spectrum of the ambient sound is obtained, and the sounding subjects contained in the current scene are determined according to the frequency values contained in the frequency domain spectrum.
In this embodiment, the terminal device may perform a Fourier transform on the ambient sound to convert the time domain signal into a frequency domain signal, obtain the frequency domain spectrum corresponding to the ambient sound, and determine the sounding subjects contained in the scene from the frequency values contained in the spectrum and their corresponding amplitudes. Since different objects have characteristic sounding frequencies, the terminal device can distinguish sounding subjects by frequency value; for example, the sounding frequency of the human body is 8-10 kHz, and the sounding frequency of a buzzer is fixed at 2 kHz. Therefore, by converting the ambient sound into a frequency domain signal, the sounding subjects corresponding to the ambient sound can be determined.
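A minimal Python sketch of this frequency-domain analysis; the band table is hypothetical and simply encodes the two example frequencies from the text.

```python
import numpy as np

# Hypothetical band table; the 8-10 kHz and 2 kHz figures come from the text.
SUBJECT_BANDS_HZ = {"human body": (8_000.0, 10_000.0),
                    "buzzer": (1_900.0, 2_100.0)}

def detect_sounding_subjects(samples, sample_rate, rel_threshold=0.1):
    """Report subjects whose characteristic band carries significant amplitude."""
    spectrum = np.abs(np.fft.rfft(samples))        # frequency-domain amplitudes
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    peak = spectrum.max() + 1e-12
    found = []
    for name, (lo, hi) in SUBJECT_BANDS_HZ.items():
        band = spectrum[(freqs >= lo) & (freqs <= hi)]
        if band.size and band.max() / peak >= rel_threshold:
            found.append(name)
    return found
```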
In a possible implementation manner, the terminal device may determine a weight value for each sounding subject, for example as follows: the terminal device identifies the amplitude of each sounding subject in the frequency domain spectrum and determines the weight value of each sounding subject based on that amplitude.
In S903, the scene type is determined from all the sounding subjects.
In this embodiment, after determining the sounding subjects, the terminal device may calculate, for each candidate type, a matching factor of every recognized sounding subject, and superimpose the matching factors of all the sounding subjects to obtain the matching degree of that candidate type. The candidate scene with the highest matching degree is selected as the scene type corresponding to the ambient sound.
In the embodiment of the application, the ambient sound is collected through the microphone, and the sounding subjects contained in it are analyzed to determine the current scene type, so that automatic identification of the scene type is realized and the accuracy of electronic card selection is improved.
Fig. 10 shows a flowchart of a specific implementation of an electronic card selecting method S301 according to a fourth embodiment of the present application. Referring to fig. 10, relative to the embodiment shown in fig. 3, S301 in the method for selecting an electronic card provided in this embodiment includes S1001 to S1003, which are detailed as follows:
further, the acquiring current scene information and determining a scene type according to the scene information includes:
in S1001, current position information is acquired, and a scene keyword included in the position information is extracted.
In this embodiment, a positioning module is built into the terminal device; the current positioning coordinates of the terminal device can be determined by the positioning module, and the position information associated with the positioning coordinates can be obtained from a third-party map server or a map application. For example, if the current positioning coordinates obtained by the terminal device are (113.300562, 23.143292), they may be input to a corresponding map application to obtain the associated position information, for example: Bank G in District B of City A. The current scene type can then be determined from the text content of this position information.
In this embodiment, the terminal device may extract the scene keyword from the position information through a semantic recognition algorithm. In a possible implementation manner, the terminal device may delete the characters related to the region, retain the characters related to the scene, and use the latter as the scene keyword. For example, if the determined position information is "Bank G, No. XX, Street C, District B, City A", the semantic recognition algorithm can determine that "No. XX, Street C, District B, City A" consists of region-related characters and delete them, leaving the scene-related characters "Bank G" as the scene keyword.
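A toy Python sketch of the keyword extraction; a fixed list of region markers stands in for the semantic recognition algorithm, which this application does not specify.

```python
REGION_MARKERS = ("City", "District", "Street", "Road", "No.")

def extract_scene_keyword(position_text):
    """Drop region-related segments, keep the scene-related remainder.

    extract_scene_keyword("City A, District B, Street C No. XX, Bank G")
    -> "Bank G"
    """
    segments = [s.strip() for s in position_text.split(",")]
    scene_parts = [s for s in segments
                   if not any(marker in s for marker in REGION_MARKERS)]
    return " ".join(scene_parts)
```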
In S1002, the confidence probabilities of the candidate scenes are respectively calculated according to the confidence of the candidate scenes associated with all the scene keywords.
In this embodiment, the terminal device may calculate the confidence between each scene keyword and each candidate scene, and compute the confidence probability between the position information and each candidate scene from the confidences of all the scene keywords. For example, if the position information contains scene keyword A and scene keyword B, whose confidences with the first candidate scene are 80% and 60% respectively, the terminal device may superimpose the two confidences, or calculate their average, and use the result as the confidence probability of the first candidate scene.
In a possible implementation manner, the terminal device may configure a corresponding keyword list for each candidate scene, determine whether a scene keyword appears in the keyword list of a candidate scene, and determine the confidence based on the result. Specifically, if the scene keyword is in the keyword list of the candidate scene, the confidence between the scene keyword and that candidate scene is identified as 100%; otherwise, the terminal device checks whether any characters of the scene keyword appear in the keyword list and determines the confidence between the scene keyword and the candidate scene based on the number of shared characters.
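A minimal Python sketch combining the two rules above (exact hit = 100%, otherwise character overlap) with averaging as the aggregation option; the character-overlap formula is an assumption.

```python
def keyword_confidence(keyword, keyword_list):
    """100% on an exact hit; otherwise the best character-overlap ratio."""
    if keyword in keyword_list:
        return 1.0
    best = 0.0
    for entry in keyword_list:
        shared = sum(1 for ch in set(keyword) if ch in entry)
        best = max(best, shared / max(len(set(keyword)), 1))
    return best

def scene_confidence_probability(scene_keywords, keyword_list):
    """Aggregate per-keyword confidences by averaging (one of the two options)."""
    if not scene_keywords:
        return 0.0
    return (sum(keyword_confidence(k, keyword_list) for k in scene_keywords)
            / len(scene_keywords))
```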
In S1003, the candidate scene with the highest confidence probability is selected as the scene type corresponding to the location information.
In this embodiment, the terminal device may select the candidate scene with the highest confidence probability as the scene type matching the position information.
In the embodiment of the application, the position information is determined and semantically analyzed to obtain the scene keywords, and the confidence probability of each candidate scene is determined from these keywords, so that the current scene type is determined, automatic identification of the scene type is realized, and the accuracy of electronic card selection is improved.
Fig. 11 shows a flowchart of a specific implementation of an electronic card selecting method S302 according to a fifth embodiment of the present application. Referring to fig. 11, relative to any one of the embodiments shown in figs. 3, 6, 9 and 10, S302 in the method for selecting an electronic card provided in this embodiment includes S1101 to S1102, which are detailed as follows:
further, the selecting a candidate electronic card matching the scene type as a target electronic card includes:
in S1101, a matching degree between each of the candidate electronic cards and the scene type is calculated, respectively.
In this embodiment, after determining the scene type, the terminal device may calculate the matching degree between each candidate electronic card stored in the terminal device and the scene type. Specifically, the terminal device may store standard scenes for each candidate electronic card, where each standard scene corresponds to at least one scene tag, and build a tag tree based on the range of each scene tag. For example, a certain transit electronic card may be associated with the scene tags "regional bus", "public transport" and "traffic". The tree structure can be determined from the range covered by each tag: "bus" is a general term covering "regional bus", "city bus" and other bus types, i.e., the range of "bus" is larger than that of "regional bus", so "bus" is the parent node of "regional bus"; continuing in this way, a tag tree can be constructed. The terminal device may configure the matching degree according to range size, with a smaller range corresponding to a higher matching degree. The terminal device then determines whether the current scene type matches any scene tag of a candidate electronic card and, based on the matching degree associated with the matched scene tag, determines the matching degree between the scene type and that candidate electronic card.
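A toy Python sketch of such a tag tree, using tag depth as a stand-in for "range size" (deeper, more specific tags score higher); the tree contents and scoring formula are illustrative assumptions.

```python
# Hypothetical tag tree: child -> parent; a deeper tag covers a smaller range.
PARENT = {"regional bus": "bus", "city bus": "bus",
          "bus": "public transport", "public transport": "traffic",
          "traffic": None}

def tag_depth(tag):
    """Depth in the tree; deeper tags are more specific (smaller range)."""
    depth = 0
    while PARENT.get(tag) is not None:
        tag = PARENT[tag]
        depth += 1
    return depth

def card_matching_degree(scene_type, card_tags):
    """Score a card by the depth of its tag that matches the scene type."""
    max_depth = max(tag_depth(t) for t in PARENT)
    matches = [(tag_depth(t) + 1) / (max_depth + 1)
               for t in card_tags if t == scene_type]
    return max(matches, default=0.0)

# A transit card tagged as in the example above:
transit_tags = ["regional bus", "public transport", "traffic"]
print(card_matching_degree("regional bus", transit_tags))   # 1.0 (most specific)
print(card_matching_degree("traffic", transit_tags))        # 0.25 (broadest)
```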
In S1102, the candidate electronic card with the highest matching degree is selected as the target electronic card.
In this embodiment, the matching degree identifies the strength of association between each candidate electronic card and the current scene: the higher the matching degree, the stronger the association, and the lower the matching degree, the weaker the association. Based on this, the terminal device can select the candidate electronic card with the highest matching degree as the target electronic card, thereby achieving automatic selection of the electronic card.
In the embodiment of the application, the matching degree between each candidate electronic card and the scene type is calculated, and the candidate electronic card with the highest matching degree is selected as the target electronic card, so that the selection accuracy of the target electronic card is improved.
Further, as another embodiment of the present application, after S302, S1103 and S1104 may be further included:
in S1103, a card swiping authentication operation is performed with the card swiping device through the target electronic card.
In this embodiment, after determining the target electronic card, the terminal device may send the card information of the target electronic card to the card swiping device over a near field communication link, so as to perform card swiping authentication and determine whether the target electronic card matches the card swiping device. If they match, subsequent operations such as authentication, authorization and fee deduction are performed, depending on the type of operation initiated by the user; for example, if the target electronic card is a transit card, the fare can be paid through it, and if the target electronic card is an access control card, the door-opening authorization can be granted through it. If the card swiping authentication is detected to fail, the operation of S1104 is executed.
In S1104, if the card swiping authentication fails, the candidate electronic card with the highest matching degree among all the candidate electronic cards other than the target electronic card is selected as the new target electronic card, and the card swiping authentication operation with the card swiping device is executed again, until the card swiping authentication succeeds.
In this embodiment, if the terminal device receives authentication failure information fed back by the card swiping device, the currently selected target electronic card does not match the current scene type, and the target electronic card needs to be re-determined from the remaining candidate electronic cards. The terminal device therefore selects the candidate electronic card with the highest matching degree among the remaining candidates as the new target electronic card and re-executes the card swiping authentication operation, until the card swiping authentication succeeds.
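A minimal Python sketch of the retry loop; authenticate(card) is an assumed hook wrapping the near field communication exchange with the card swiping device.

```python
def swipe_until_success(candidates, authenticate):
    """Try candidate cards in descending matching degree until one authenticates.

    candidates: list of (card, matching_degree) pairs;
    authenticate(card): assumed hook for the NFC card swiping exchange,
    returning True on successful authentication.
    """
    ordered = sorted(candidates, key=lambda pair: pair[1], reverse=True)
    for card, _degree in ordered:
        if authenticate(card):          # authentication succeeded: done
            return card
    return None                         # every candidate was rejected
```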
In the embodiment of the application, when card swiping fails, the candidate electronic card with the next highest matching degree is automatically selected as the target electronic card, so that the electronic card is replaced automatically and user operations are reduced.
Fig. 12 is a flowchart illustrating a specific implementation of an electronic card selecting method S302 according to a sixth embodiment of the present application. Referring to fig. 12, relative to any one of the embodiments shown in figs. 3, 6, 9 and 10, S302 in the method for selecting an electronic card provided in this embodiment includes S1201 to S1202, which are detailed as follows:
further, the selecting a candidate electronic card matching the scene type as a target electronic card includes:
in S1201, a standard scenario of each of the candidate electronic cards is acquired.
In this embodiment, when storing each candidate electronic card, the terminal device may determine an associated standard scene according to user settings or based on the electronic card type and build a standard scene index table; after determining the scene type of the current scene, the terminal device obtains the standard scene pre-associated with each candidate electronic card from that index table.
In S1202, the scene type is matched with each of the standard scenes, and the target electronic card is determined according to a matching result.
In this embodiment, the terminal device may match the currently identified scene type against each standard scene, determine whether the standard scene of any candidate electronic card is consistent with the current scene type, and, if so, identify that candidate electronic card as the target electronic card.
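A minimal Python sketch of the index-table lookup described in S1201-S1202; the dict-based index is an illustrative assumption.

```python
def select_by_standard_scene(scene_type, standard_scene_index):
    """Return the first card whose pre-associated standard scene matches.

    standard_scene_index: dict mapping card id -> standard scene, built when
    the cards were stored (from user settings or from the card type).
    """
    for card_id, standard_scene in standard_scene_index.items():
        if standard_scene == scene_type:
            return card_id
    return None

# Illustrative usage:
index = {"transit_card": "traffic", "door_card": "access control",
         "bank_card": "bank"}
print(select_by_standard_scene("bank", index))   # bank_card
```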
In the embodiment of the application, a standard scene is associated with each candidate electronic card, the standard scenes are matched against the scene type, and the target electronic card is determined from the result, so that the target electronic card is selected automatically, the operation difficulty for the user is reduced, and the card swiping efficiency is improved.
Fig. 13 is a schematic structural diagram illustrating an electronic card selecting system according to an embodiment of the present application. Referring to fig. 13, the electronic card selecting system includes a mobile terminal 131, smart glasses 132, an external microphone 133 and a card swiping device 134. Communication connections are established between the mobile terminal 131 and the smart glasses 132 as well as between the mobile terminal 131 and the external microphone 133, and the mobile terminal 131 communicates with the card swiping device 134 through a near field communication module. The mobile terminal 131 has a built-in camera module 1311, a positioning module 1312 and an internal microphone module 1313, through which it can acquire different types of scene information. It should be noted that the mobile terminal 131 may call any single module or external device to acquire one kind of scene information, or acquire several kinds of scene information through two or more modules or external devices; it then determines the scene type based on the scene information and selects the target electronic card based on the scene type.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Fig. 14 is a block diagram of an electronic card selecting device according to an embodiment of the present application, which illustrates only a portion related to the embodiment of the present application for convenience of description.
Referring to fig. 14, the electronic card selecting device includes:
a scene type determining unit 141, configured to obtain current scene information and determine a scene type according to the scene information;
and an electronic card selecting unit 142, configured to select a candidate electronic card that matches the scene type as a target electronic card.
Optionally, the scene type determining unit 141 includes:
the scene image acquisition unit is used for receiving a scene image fed back by the intelligent glasses;
a scene image analysis unit configured to identify a photographic subject included in the scene image;
and the shooting subject analyzing unit is used for determining the scene type according to all the shooting subjects.
Optionally, the scene type determining unit 141 includes:
the environment sound acquisition unit is used for acquiring environment sound in the current scene;
the sounding subject determining unit is used for acquiring the frequency domain spectrum of the environmental sound and determining the sounding subjects contained in the current scene according to the frequency values contained in the frequency domain spectrum;
and the sounding subject analyzing unit is used for determining the scene type according to all the sounding subjects.
Optionally, the scene type determining unit 141 includes:
the scene keyword extraction unit is used for acquiring current position information and extracting scene keywords contained in the position information;
the confidence probability calculation unit is used for respectively calculating the confidence probability of each candidate scene according to the confidence degrees of the candidate scenes associated with all the scene keywords;
and the scene type selecting unit is used for selecting the candidate scene with the highest confidence probability as the scene type corresponding to the position information.
Optionally, the electronic card selecting unit 142 includes:
the matching degree calculation unit is used for calculating the matching degree between each candidate electronic card and the scene type respectively;
and the matching degree selecting unit is used for selecting the candidate electronic card with the highest matching degree as the target electronic card.
Optionally, the electronic card selecting device further includes:
the card swiping authentication unit is used for executing card swiping authentication operation through the target electronic card and the card swiping equipment;
and the authentication failure response unit is used for selecting the candidate electronic card with the highest matching degree from all the candidate electronic cards except the target electronic card as a new target electronic card if the card swiping authentication fails, and returning to execute the card swiping operation executed by the target electronic card and the card swiping equipment until the card swiping authentication succeeds.
Optionally, the electronic card selecting unit 142 includes:
the standard scene obtaining unit is used for obtaining the standard scene of each candidate electronic card;
and the standard scene matching unit is used for matching the scene type with each standard scene and determining the target electronic card according to a matching result.
Therefore, the electronic card selection device provided in the embodiment of the present application acquires the current scene information, determines the scene type according to the scene information, and selects the candidate electronic card matching the scene type as the target electronic card, so that the electronic card is selected automatically and accurately without manual switching by the user, which reduces the operation difficulty and improves the card swiping efficiency.
Fig. 15 is a schematic structural diagram of a terminal device according to an embodiment of the present application. As shown in fig. 15, the terminal device 15 of this embodiment includes: at least one processor 150 (only one shown in fig. 15), a memory 151, and a computer program 152 stored in said memory 151 and operable on said at least one processor 150, said processor 150 implementing the steps in any of the above described embodiments of the method of selecting an electronic card when executing said computer program 152.
The terminal device 15 may be a desktop computer, a notebook, a palm computer, a cloud server, or another computing device. The terminal device may include, but is not limited to, a processor 150 and a memory 151. Those skilled in the art will appreciate that fig. 15 is merely an example of the terminal device 15 and does not constitute a limitation on the terminal device 15, which may include more or fewer components than those shown, combine some components, or use different components, such as an input/output device, a network access device, and the like.
The processor 150 may be a Central Processing Unit (CPU), or another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. A general-purpose processor may be a microprocessor, or any conventional processor.
The memory 151 may, in some embodiments, be an internal storage unit of the terminal device 15, such as a hard disk or a memory of the terminal device 15. In other embodiments, the memory 151 may also be an external storage device of the terminal device 15, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card provided on the terminal device 15. Further, the memory 151 may include both an internal storage unit and an external storage device of the terminal device 15. The memory 151 is used for storing an operating system, application programs, a boot loader (BootLoader), data, and other programs, such as the program code of the computer program. The memory 151 may also be used to temporarily store data that has been output or is to be output.
It should be noted that, for the information interaction, execution process, and other contents between the above-mentioned devices/units, the specific functions and technical effects thereof are based on the same concept as those of the embodiment of the method of the present application, and specific reference may be made to the part of the embodiment of the method, which is not described herein again.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
An embodiment of the present application further provides a network device, where the network device includes: at least one processor, a memory, and a computer program stored in the memory and executable on the at least one processor, the processor implementing the steps of any of the various method embodiments described above when executing the computer program.
The embodiments of the present application further provide a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the computer program implements the steps in the above-mentioned method embodiments.
The embodiments of the present application provide a computer program product, which when running on a mobile terminal, enables the mobile terminal to implement the steps in the above method embodiments when executed.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, all or part of the processes in the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of the method embodiments described above. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, etc. The computer readable medium may include at least: any entity or device capable of carrying the computer program code to the photographing apparatus/terminal apparatus, a recording medium, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium, for example a USB flash disk, a removable hard disk, a magnetic disk, or an optical disk. In certain jurisdictions, computer-readable media may not include electrical carrier signals or telecommunications signals, in accordance with legislation and patent practice.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/network device and method may be implemented in other ways. For example, the above-described apparatus/network device embodiments are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implementing, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.