Resource scheduling method, device, equipment and computer storage medium
1. A resource scheduling method comprises the following steps:
acquiring at least one load characteristic of a target service;
determining a current pressure condition of the target service in dependence on the at least one load characteristic;
determining the number of expected resource instances according to the current pressure condition;
and scheduling resource instances so that the number of resource instances of the target service reaches the expected number of resource instances, wherein the scheduled resource instances are preconfigured with a model file of the target service.
2. The method of claim 1, wherein the at least one load characteristic comprises at least one of an amount of bytes per second, a wait queue length, and a request response time.
3. The method of claim 1 or 2, wherein the obtaining at least one load characteristic of a target service comprises: acquiring at least one load characteristic of each current resource instance of the target service;
said determining a current pressure condition of said target service in dependence on said at least one load characteristic comprises: for each load characteristic, averaging the load characteristic over the current resource instances to obtain a pressure value of the load characteristic of the target service.
4. The method of claim 3, wherein determining a desired number of resource instances as a function of the current pressure condition comprises:
for each load characteristic, determining an expected resource instance number corresponding to the load characteristic according to the maximum pressure value of the load characteristic which can be borne by the target service, the pressure value of the load characteristic of the target service and the current resource instance number of the target service;
and obtaining the expected resource instance number of the target service by utilizing the expected resource instance number corresponding to each load characteristic.
5. The method of claim 1, wherein prior to the scheduling a resource instance, further comprising:
determining an instance change rate according to the expected number of the resource instances and the current number of the resource instances of the target service;
and if the instance change rate is greater than or equal to a preset system tolerance, continuing to execute the scheduling of resource instances so that the number of resource instances of the target service reaches the expected number of resource instances.
6. The method of claim 1, wherein the scheduling resource instances such that the number of resource instances of the target service reaches the desired number of resource instances comprises:
determining the resource instances to be added according to the expected resource instance number and the current resource instance of the target service;
and sending a service switching instruction to the resource instance to be added, so that the resource instance receiving the service switching instruction is switched to the target service by utilizing the pre-configured model file of the target service.
7. A resource scheduling apparatus, comprising:
the load acquisition unit is used for acquiring at least one load characteristic of the target service;
a scheduling decision unit for determining a current pressure condition of the target service according to the at least one load characteristic; determining the number of expected resource instances according to the current pressure condition;
and the scheduling execution unit is used for scheduling the resource instances to enable the number of the resource instances of the target service to reach the expected number of the resource instances, wherein the scheduled resource instances are configured with the model file of the target service in advance.
8. The apparatus of claim 7, wherein the at least one load characteristic comprises at least one of an amount of bytes per second, a wait queue length, and a request response time.
9. The apparatus according to claim 7 or 8, wherein the scheduling decision unit is specifically configured to obtain at least one load characteristic of each current instance of the target service; and, for each load characteristic, average the load characteristic over the current instances to obtain a pressure value of the load characteristic of the target service.
10. The apparatus according to claim 9, wherein the scheduling decision unit is specifically configured to determine, for each load characteristic, an expected number of resource instances corresponding to the load characteristic according to a maximum pressure value of the load characteristic that the target service can bear, a pressure value of the load characteristic of the target service, and a current number of resource instances of the target service; and obtaining the expected resource instance number of the target service by utilizing the expected resource instance number corresponding to each load characteristic.
11. The apparatus of claim 7, wherein the scheduling decision unit is further configured to determine an instance change rate according to the number of expected resource instances and the number of current resource instances of the target service; and if the instance change rate is greater than or equal to the preset system tolerance, executing the scheduling resource instance to enable the number of the resource instances of the target service to reach the expected number of the resource instances.
12. The apparatus according to claim 7, wherein the scheduling execution unit is specifically configured to determine, according to the number of expected resource instances and a current resource instance of the target service, a resource instance to be added; and sending a service switching instruction to the resource instance to be added, so that the resource instance receiving the service switching instruction is switched to the target service by utilizing the pre-configured model file of the target service.
13. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-6.
14. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-6.
15. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1-6.
Background
With the continuous development of artificial intelligence technology, more and more services take an online form, such as online machine translation, online navigation, online speech recognition, and so forth. Online services often face unexpected traffic increases, which are conventionally handled by manual capacity expansion. However, manual expansion is usually time-consuming and has a large impact on service availability.
Disclosure of Invention
In view of the above, the present disclosure provides a resource scheduling method, device, apparatus, and computer storage medium, which are used to quickly cope with a traffic increase of an online service and reduce an impact on service availability.
According to a first aspect of the present disclosure, there is provided a resource scheduling method, including:
acquiring at least one load characteristic of a target service;
determining a current pressure condition of the target service in dependence on the at least one load characteristic;
determining the number of expected resource instances according to the current pressure condition;
and scheduling the resource instances to enable the number of the resource instances of the target service to reach the expected number of the resource instances, wherein the scheduled resource instances are configured with the model files of the target service in advance.
According to a second aspect of the present disclosure, there is provided a resource scheduling apparatus, including:
the load acquisition unit is used for acquiring at least one load characteristic of the target service;
a scheduling decision unit for determining a current pressure condition of the target service according to the at least one load characteristic; determining the number of expected resource instances according to the current pressure condition;
and the scheduling execution unit is used for scheduling the resource instances to enable the number of the resource instances of the target service to reach the expected number of the resource instances, wherein the scheduled resource instances are configured with the model file of the target service in advance.
According to a third aspect of the present disclosure, there is provided an electronic device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method as described above.
According to a fourth aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method as described above.
According to a fifth aspect of the disclosure, a computer program product comprising a computer program which, when executed by a processor, implements the method as described above.
According to the above technical solution, the load characteristics of the target service are obtained to determine its current pressure condition, and the number of resource instances is scheduled accordingly, thereby realizing automatic capacity expansion. Moreover, the model file of the target service is configured on the resource instances in advance, so that the resource instances can be switched to the target service quickly. Compared with manual capacity expansion, this improves the speed of coping with traffic increases of an online service and reduces the impact on service availability.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
fig. 1 is a flowchart of a resource scheduling method provided in an embodiment of the present disclosure;
fig. 2 is a schematic structural diagram of a resource scheduling apparatus according to an embodiment of the present disclosure;
FIG. 3 is a system scenario diagram provided by an embodiment of the present disclosure;
FIG. 4 is a block diagram of an electronic device used to implement an embodiment of the disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Fig. 1 is a flowchart of a resource scheduling method provided in an embodiment of the present disclosure. An execution subject of the method may be a resource scheduling apparatus, which may be an application located at a server end, or may be a functional unit such as a Software Development Kit (SDK) or a plug-in in an application located at the server end; this is not particularly limited in the embodiments of the present disclosure. As shown in fig. 1, the method may include the following steps:
at 101, at least one load characteristic of a target service is obtained.
At 102, a current pressure condition of the target service is determined as a function of the at least one load characteristic.
In 103, a desired number of resource instances is determined as a function of the current pressure condition.
In 104, the resource instances are scheduled such that the number of resource instances of the target service reaches the desired number of resource instances, wherein the scheduled resource instances are preconfigured with the model file of the target service.
According to the method and apparatus of the present disclosure, the load characteristics of the target service are obtained to determine its current pressure condition, and the number of resource instances is scheduled accordingly, thereby realizing automatic capacity expansion; moreover, the model file of the target service is configured on the resource instances in advance, so that the resource instances can be switched to the target service quickly. Compared with manual capacity expansion, this improves the speed of coping with traffic increases of an online service and reduces the impact on service availability. The above steps are described in detail below with reference to embodiments.
The above step 101, i.e. "obtaining at least one load characteristic of a target service", is first described in detail.
The target service in the present disclosure refers to an online service that needs to perform pressure monitoring and perform resource scheduling in time. The service type may be, for example, an online translation type service, an online navigation type service, an online voice recognition type service, etc., and the present disclosure is not limited to a specific service type.
In terms of selecting load characteristics, conventional approaches mostly adopt characteristics such as Central Processing Unit (CPU) utilization, disk input/output (IO) packet rate, and network IO packet rate. However, these conventional load characteristics do not truly reflect the load pressure of services such as online translation and do not match the characteristics of online services. Accordingly, embodiments of the present disclosure provide some preferred load characteristics, such as at least one of the amount of bytes per second, the wait queue length, and the request response time.
Since the target service's executors are resource instances, this step is actually to obtain at least one load characteristic of each current resource instance of the target service.
Taking the online translation service as an example, assuming that the online translation service currently has m resource instances, at least one of the byte per second amount, the waiting queue length, the request response time, and the waiting time of each resource instance is obtained.
The byte amount per second may be the byte amount of the text to be translated processed per second by the resource instance, or the byte amount transmitted to the resource instance per second.
When the distribution unit in the service system distributes a translation request to a resource instance, the request is first placed in the queue of that resource instance. The length of queue occupied by requests waiting to be processed can be used as the wait queue length, which effectively reflects the load of the corresponding resource instance.
The request response time refers to the duration from when the service system receives the translation request to when the resource instance finishes processing the request.
When the load characteristics of the current resource instance are obtained, the address of the current resource instance may be obtained through middleware such as Zookeeper or Mycat, and the load characteristics of each resource instance are collected at regular time.
The above step 102, i.e. "determining the current pressure condition of the target service depending on the at least one load characteristic", is described in detail below.
The current pressure condition of the target service is actually a composite manifestation of the pressure conditions of its current resource instances, and each load characteristic has its own manifestation of pressure. Therefore, for each load characteristic, the values of that characteristic on the current resource instances are averaged to obtain the pressure value of the load characteristic for the target service. Assuming that n load characteristics are obtained and the target service currently has m resource instances, then for each load characteristic i, the value of characteristic i is obtained from each of the m current resource instances and averaged, yielding the pressure value of characteristic i for the target service, denoted CurrentMetric_i. In this way, a pressure value is obtained for each load characteristic of the target service.
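A minimal sketch of this averaging step follows; the function and variable names are illustrative and not from the disclosure. Each load characteristic is averaged over the m current resource instances to yield its pressure value CurrentMetric_i.

```python
# Illustrative sketch: average each load characteristic over the current
# resource instances of the target service to obtain its pressure values.
def service_pressure(instance_metrics):
    """instance_metrics maps instance id -> {characteristic name: value}."""
    totals = {}
    for metrics in instance_metrics.values():
        for name, value in metrics.items():
            totals[name] = totals.get(name, 0.0) + value
    m = len(instance_metrics)  # number of current resource instances
    # CurrentMetric_i: mean value of load characteristic i over all instances
    return {name: total / m for name, total in totals.items()}

pressure = service_pressure({
    "inst-1": {"bytes_per_second": 800.0, "wait_queue_length": 4},
    "inst-2": {"bytes_per_second": 1200.0, "wait_queue_length": 6},
})
# pressure: {"bytes_per_second": 1000.0, "wait_queue_length": 5.0}
```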
In addition to determining the current pressure condition of the target service from the load characteristics of each resource instance as described above, other approaches may be adopted. For example, the current pressure condition of the target service may be reflected by the total bytes per second, request response time, and so on of the service system.
The above step 103, i.e. "determining the number of instances of the desired resource depending on the current pressure condition", is described in detail below.
The expected number of resource instances determined in this step is actually a determination of how many resource instances are needed to satisfy the current pressure condition.
As a preferred embodiment, for each load characteristic, the expected number of resource instances corresponding to the load characteristic may be determined according to the maximum pressure value of the load characteristic that the target service can bear, the pressure value of the load characteristic of the target service, and the current number of resource instances of the target service. And then obtaining the expected resource instance number of the target service by utilizing the expected resource instance number corresponding to each load characteristic.
For example, the expected number of resource instances DesireInstance_i corresponding to load characteristic i may be determined using the following formula:

DesireInstance_i = Ceil(CurrentInstance × CurrentMetric_i / DesiredMetric_i)

where CurrentInstance represents the current number of resource instances of the target service, CurrentMetric_i represents the pressure value of load characteristic i of the target service, and DesiredMetric_i represents the maximum pressure value of load characteristic i that the target service can bear; this maximum pressure value is preset and may be an empirical or experimental value chosen according to the actual situation. The Ceil() function returns the smallest integer greater than or equal to its argument.
The expected number of resource instances of the target service, FinalDesireInstance, may be determined using the following formula:

FinalDesireInstance = Max(DesireInstance_1, DesireInstance_2, …, DesireInstance_n)
where the Max () function represents taking the maximum value.
The expected number of resource instances of the target service may also be determined in ways other than taking the maximum, such as multiplying the mean by a coefficient, or multiplying the maximum by a coefficient, and so on.
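As an illustrative sketch (the function and parameter names here are assumptions, not the disclosure's code), the per-characteristic formula and the maximum-based combination can be written as:

```python
import math

# Illustrative sketch of the desired-instance computation described above.
def desired_instances(current_instances, pressures, max_pressures):
    """pressures: CurrentMetric_i per characteristic; max_pressures: DesiredMetric_i."""
    per_characteristic = [
        # DesireInstance_i = Ceil(CurrentInstance * CurrentMetric_i / DesiredMetric_i)
        math.ceil(current_instances * pressures[name] / max_pressures[name])
        for name in pressures
    ]
    # FinalDesireInstance = Max(DesireInstance_1, ..., DesireInstance_n)
    return max(per_characteristic)

# 4 instances, bytes/s at 1.5x the bearable pressure, queue at 1.2x:
n = desired_instances(4,
                      {"bytes_per_second": 150.0, "wait_queue_length": 6.0},
                      {"bytes_per_second": 100.0, "wait_queue_length": 5.0})
# n == 6 (the maximum of ceil(6.0) = 6 and ceil(4.8) = 5)
```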
The following describes in detail the above step 104, that is, "scheduling resource instances to make the number of resource instances of the target service reach the desired number of resource instances, where the scheduled resource instances are configured with the model file of the target service in advance", with reference to the embodiments.
In this step, the resource instances to be added can be determined according to the expected number of the resource instances and the current resource instance of the target service; and then sending a service switching instruction to the resource instance to be added, so that the resource instance receiving the service switching instruction is switched to the target service by utilizing the pre-configured model file of the target service.
When determining the resource instances to be added, the difference between the expected number of resource instances and the current number of resource instances of the target service (denoted as ΔInstance) is actually taken as the objective, and scheduling is driven so that the number of instances of the target service tends toward the expected number.
When scheduling instances, resource instances may be selected from idle resource instances as the resource instances to be added, e.g., ΔInstance resource instances may be selected from the idle resource instances. Alternatively, resource instances may be selected from the resource instances of other services and switched, e.g., ΔInstance resource instances may be selected from the resource instances of other services. The two approaches may also be combined: for example, X resource instances are preferentially selected from the idle resource instances, and if the idle resource instances are insufficient, (ΔInstance − X) resource instances may be selected from the resource instances of other services with lower service priority. Other alternatives are also possible and are not exhaustively listed here.
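The combined selection strategy above can be sketched as follows; this is a simplified illustration, and the list-based representation of instance pools is an assumption:

```python
# Illustrative sketch: pick the ΔInstance resource instances to add, preferring
# idle instances and falling back to instances of lower-priority services.
def select_instances_to_add(delta_instance, idle_instances, low_priority_instances):
    chosen = idle_instances[:delta_instance]
    shortfall = delta_instance - len(chosen)
    if shortfall > 0:
        # Idle pool is insufficient: take the remainder from lower-priority services.
        chosen = chosen + low_priority_instances[:shortfall]
    return chosen

added = select_instances_to_add(3, ["idle-1"], ["low-1", "low-2", "low-3"])
# added: ["idle-1", "low-1", "low-2"]
```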
The added resource instances may be provided to the request distribution unit of the target service in the form of a list, so that the request distribution unit distributes received requests to the resource instances of the target service using a preset policy.
On the other hand, after determining the resource instance to be added, the resource scheduling device may send a service switching instruction to the resource instance to be added. And the resource instance receiving the service switching instruction is switched to the target service.
In a conventional implementation, when a resource instance switches services, it must complete steps such as stopping the source service, downloading the target service image, and starting the target service, which takes a long time and may degrade service quality. Therefore, the present disclosure provides a preferred embodiment in which more than one model file is deployed on a resource instance in advance, the model files including at least the model file of the target service; all the model files are compressed, packaged together, and deployed on the resource instance. Thus, after receiving a service switching instruction, the resource instance can quickly switch to the target service using the pre-deployed model file. This process is actually an online switch of the model file and requires no stopping, starting, or image downloading.
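The fast-switching idea can be illustrated with a toy sketch (class and method names are hypothetical): because the model files of several services are deployed in advance, switching amounts to loading a local model file rather than stopping the instance and downloading an image.

```python
# Toy illustration of service switching via pre-deployed model files.
class ResourceInstance:
    def __init__(self, model_files):
        # model_files: service name -> path of the pre-deployed model file
        self.model_files = model_files
        self.active_service = None

    def switch_service(self, target_service):
        # Online model switch: no service stop/start, no image download.
        if target_service not in self.model_files:
            raise ValueError("model file of %s not pre-deployed" % target_service)
        self.active_service = target_service

inst = ResourceInstance({"zh-en": "/models/zh-en.bin", "en-de": "/models/en-de.bin"})
inst.switch_service("zh-en")  # the instance now serves the zh-en translation service
```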
Furthermore, some preprocessing functions of the target service can be stripped in advance from the model file processing on the resource instance and implemented as a preprocessing service module, which is called during actual service to perform the preprocessing. Taking the translation service as an example, processing that is not strongly coupled to the translation model, such as dictionary lookup, normalization, and shorthand processing, can be implemented as a preprocessing service module, so that only the model files for the different translation directions need to be deployed in advance.
Furthermore, in order to cope with frequent scheduling caused by small-amplitude traffic jitter, the following steps may be performed before step 104: determining an instance change rate according to the expected number of resource instances and the current number of resource instances of the target service. If the instance change rate is greater than or equal to a preset system tolerance, step 104 is executed, i.e., the resource instances are scheduled so that the number of resource instances of the target service reaches the expected number of resource instances. Otherwise, step 104 is not performed.
For example, the instance change rate T may be determined as the relative change between the expected and current numbers of resource instances:

T = |FinalDesireInstance − CurrentInstance| / CurrentInstance

The system tolerance can be set according to the specific type of the target service and depends on how tolerant the target service is of service delay.
The above is a detailed description of the method provided by the present disclosure, and the following is a detailed description of the apparatus provided by the present disclosure with reference to the embodiments.
Fig. 2 is a schematic structural diagram of a resource scheduling apparatus according to an embodiment of the present disclosure, and as shown in fig. 2, the apparatus 200 may include: a load obtaining unit 201, a scheduling decision unit 202 and a scheduling execution unit 203. The main functions of the components are as follows:
a load obtaining unit 201, configured to obtain at least one load characteristic of the target service.
Wherein the at least one load characteristic may include at least one of an amount of bytes per second, a wait queue length, and a request response time.
A scheduling decision unit 202, configured to determine a current pressure condition of the target service according to at least one load characteristic; a desired number of resource instances is determined based on the current pressure condition.
And the scheduling execution unit 203 is configured to schedule the resource instances so that the number of the resource instances of the target service reaches the expected number of the resource instances, wherein the scheduled resource instances are configured with the model file of the target service in advance.
As a preferred embodiment, the scheduling decision unit 202 may obtain at least one load characteristic of each current instance of the target service, and, for each load characteristic, average the load characteristic over the current instances to obtain a pressure value of the load characteristic of the target service.
As a preferred embodiment, the scheduling decision unit 202 may determine, for each load characteristic, an expected number of resource instances corresponding to the load characteristic according to a maximum pressure value of the load characteristic that can be borne by the target service, a pressure value of the load characteristic of the target service, and a current number of resource instances of the target service; and obtaining the expected resource instance number of the target service by utilizing the expected resource instance number corresponding to each load characteristic.
Furthermore, in order to cope with the frequent scheduling phenomenon caused by small-amplitude traffic jitter, the scheduling decision unit 202 is further configured to determine an instance change rate according to the number of expected resource instances and the number of current resource instances of the target service; and if the instance change rate is greater than or equal to the preset system tolerance, executing scheduling of the resource instances to enable the number of the resource instances of the target service to reach the expected number of the resource instances.
As a preferred embodiment, the scheduling execution unit 203 is specifically configured to determine a resource instance to be added according to the number of expected resource instances and a current resource instance of the target service; and sending a service switching instruction to the resource instance to be added, so that the resource instance receiving the service switching instruction is switched to the target service by utilizing the pre-configured model file of the target service.
For each resource instance, more than one model file is deployed on the resource instance in advance, wherein the model files at least comprise the model files of the target service, and all the model files are compressed, concentrated and packaged and are deployed on the resource instance. Therefore, after receiving the service switching instruction, the resource instance can be quickly switched to the target service by using the pre-deployed model file.
In practical application, the resource scheduling device can be accessed to the service system in a bypass mode, and low coupling of the resource scheduling device and the service system is ensured, so that the resource scheduling device supports hot plug.
As shown in fig. 3, the service system mainly includes two parts: a dispatch unit (Dispatcher) and a service unit (InferService). The Dispatcher is used for distributing the received service request to the resource instance corresponding to the service. Taking the translation service as an example, the received translation request is distributed to the resource instance of the corresponding translation service. The InferService is a unit for executing specific services and is composed of various service instances.
The ZooKeeper in the resource scheduling apparatus corresponds to the load obtaining unit 201 in fig. 2 and is configured to obtain at least one load characteristic of the target service. That is, after each resource instance of InferService registers with ZooKeeper, ZooKeeper monitors changes of the resource instances and of the load characteristics representing the pressure condition.
The decision center (TransformCtrl) in the resource scheduling apparatus corresponds to the scheduling decision unit 202 and the scheduling execution unit 203 in fig. 2, and is mainly used for performing scheduling decision of the resource instance and sending a service switching instruction to the resource instance.
In this embodiment, the resource scheduling apparatus further includes a communication agent (InferAgent) that serves as the communication intermediary between the resource scheduling apparatus and the service system. The InferAgent is responsible for protocol adaptation, format conversion, and the like between the resource scheduling apparatus and the resource instances in the service system.
ZooKeeper monitors each resource instance of the target service in InferService through the InferAgent, acquires the load characteristics as a service discovery result, and provides the result to TransformCtrl. TransformCtrl then performs the scheduling decision, determines the resource instances scheduled to the target service, and pushes them to the Dispatcher in the form of a resource instance list. The Dispatcher distributes requests according to the resource instance list of the target service, i.e., distributes them to the resource instances of the target service in InferService.
In fig. 3, the resource scheduling apparatus may perform bypass connection with any service system in a hot-plug manner, so as to achieve flexibility.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, as for the apparatus embodiment, since it is substantially similar to the method embodiment, the description is relatively simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The present disclosure also provides an electronic device, a readable storage medium, and a computer program product according to embodiments of the present disclosure.
Fig. 4 is a block diagram of an electronic device for the resource scheduling method according to an embodiment of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 4, the device 400 includes a computing unit 401 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 402 or a computer program loaded from a storage unit 408 into a Random Access Memory (RAM) 403. In the RAM 403, various programs and data required for the operation of the device 400 can also be stored. The computing unit 401, ROM 402, and RAM 403 are connected to each other via a bus 404. An input/output (I/O) interface 405 is also connected to bus 404.
A number of components in device 400 are connected to I/O interface 405, including: an input unit 406 such as a keyboard, a mouse, or the like; an output unit 407 such as various types of displays, speakers, and the like; a storage unit 408 such as a magnetic disk, optical disk, or the like; and a communication unit 409 such as a network card, modem, wireless communication transceiver, etc. The communication unit 409 allows the device 400 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
Computing unit 401 may be a variety of general and/or special purpose processing components with processing and computing capabilities. Some examples of the computing unit 401 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and so forth. The computing unit 401 executes the respective methods and processes described above, such as the resource scheduling method. For example, in some embodiments, the resource scheduling method may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as storage unit 408.
In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 400 via the ROM 402 and/or the communication unit 409. When the computer program is loaded into RAM 403 and executed by computing unit 401, one or more steps of the resource scheduling method described above may be performed. Alternatively, in other embodiments, the computing unit 401 may be configured to perform the resource scheduling method by any other suitable means (e.g. by means of firmware).
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine, or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, also called a cloud computing server or cloud host, which is a host product in a cloud computing service system and overcomes the defects of high management difficulty and weak service expansibility in traditional physical host and Virtual Private Server (VPS) services. The server may also be a server of a distributed system, or a server incorporating a blockchain.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present application may be executed in parallel, sequentially, or in different orders, and are not limited herein as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.