Ramp signal control optimization method and system based on reinforcement learning

Document No.: 9596 | Publication date: 2021-09-17

1. A ramp signal control optimization method based on reinforcement learning is characterized by comprising the following steps:

a ramp intersection optimization control step: detecting real-time traffic flow on the road through single-point adaptive control, selecting a ramp signal control scheme by an upper computer according to the real-time traffic flow, and establishing a SARSA signal control model;

a model parameter calibration step: acquiring a car-following and lane-changing model, and calibrating the parameters of the car-following and lane-changing model;

and a simulation step: training the SARSA signal control model and the calibrated car-following and lane-changing model according to preset requirements, and obtaining an optimized ramp signal control scheme.

2. The reinforcement learning-based ramp signal control optimization method according to claim 1,

the establishment of the SARSA signal control model comprises the following steps:

(1) episode setting: each signal cycle is one step, and five cycles form one episode;

(2) establishing an action space: cycle-length optimization and green-split optimization;

(3) establishing a state space: three indicators measured at the end of the signal cycle, namely the main-line upstream vehicle count, the ramp vehicle count, and the main-line downstream occupancy, form the state space, and the occupancy is discretized;

(4) an action selection mechanism in which the ε-greedy exploration mechanism is removed;

(5) reward function: the average running speed of vehicles in the road network over one cycle is used as the reward.

3. The reinforcement learning-based ramp signal control optimization method according to claim 1,

wherein the parameters of the car-following and lane-changing model are calibrated using the particle swarm algorithm and data provided by the NGSIM data set.

4. The reinforcement learning-based ramp signal control optimization method according to claim 3,

the particle swarm algorithm comprises the following steps:

initializing the position and speed of each particle;

calculating the fitness of each particle;

updating individual optimality and group optimality;

updating the speed and position of each particle;

and judging whether the termination condition is met; if so, ending, and if not, returning to the step of calculating the fitness of each particle.

5. The reinforcement learning-based ramp signal control optimization method according to claim 4,

the iterative formula for velocity and position is:

v_i(t+1) = ω·v_i(t) + c1·rand1()·(pbest_i - x_i(t)) + c2·rand2()·(gbest_i - x_i(t))

x_i(t+1) = x_i(t) + v_i(t+1)

where x_i(t) is the position of particle i at time t; v_i(t) is the velocity of particle i; pbest_i is the best position found by the i-th particle so far; gbest_i is the best position found by the whole swarm so far; ω is the inertia factor; c1 and c2 are learning factors; and rand1() and rand2() are the self-cognitive learning rate and the social-cognitive learning rate, respectively.

6. The reinforcement learning-based ramp signal control optimization method according to claim 1,

wherein the car-following and lane-changing model jointly considers the driving force of the driver's desired acceleration and the resistance imposed by the leading vehicle as an obstacle.

7. The reinforcement learning-based ramp signal control optimization method according to claim 6,

wherein the model equations are:

a_n(t) = a·[1 - (v_n(t)/v_0)^σ - (s*(t)/s_n(t))^2]

s*(t) = s_0 + max(0, v_n(t)·τ + v_n(t)·Δv_n(t)/(2·√(a·b)))

where a_n(t) is the acceleration of the following vehicle, s_n(t) is the gap (headway distance) at time t, s*(t) is the desired gap, s_0 is the minimum gap, τ is the desired time headway, v_0 is the desired driving speed, σ is the acceleration exponent, a is the desired maximum acceleration, b is the desired maximum deceleration, v_n(t) is the speed of the following vehicle at time t, and Δv_n(t) is the speed difference between the following and leading vehicles at time t.

8. A reinforcement learning-based ramp signal control optimization system is characterized by comprising:

a ramp intersection optimization control module, which detects real-time traffic flow on the road through single-point adaptive control; an upper computer selects a ramp signal control scheme according to the real-time traffic flow and establishes a SARSA signal control model;

a model parameter calibration module, which acquires a car-following and lane-changing model and calibrates the parameters of the car-following and lane-changing model;

and a simulation module, which trains the SARSA signal control model and the calibrated car-following and lane-changing model according to preset requirements to obtain an optimized ramp signal control scheme.

9. A computer device comprising a memory and a processor, the memory having stored therein computer readable instructions which, when executed by the processor, cause the processor to perform the steps of the reinforcement learning-based ramp signal control optimization method according to any one of claims 1 to 7.

10. A computer-readable storage medium storing computer instructions which, when executed by one or more processors, perform the steps of the reinforcement learning-based ramp signal control optimization method according to any one of claims 1 to 7.

Background

With the growing number of motor vehicles in Chinese cities, demand for expressway travel keeps increasing, so optimized management and control methods are needed to improve road-network throughput within the limits of existing road conditions and traffic capacity.

Entrance ramp control plays a major role: first, it can reduce the total travel time of all vehicles in the expressway system; second, it makes the traffic flow even and smooth; third, it eliminates or reduces conflicts and incidents at merge areas.

The prior art has the following disadvantages: existing ramp control methods are mainly designed for expressways and suffer from difficulties in traffic model construction and model parameter calibration, dependence on prior knowledge, control lag, and the like.

Disclosure of Invention

The purpose of the invention is realized by the following technical scheme.

The first aspect provides a ramp signal control optimization method based on reinforcement learning, which comprises the following steps:

a ramp intersection optimization control step: detecting real-time traffic flow on the road through single-point adaptive control, selecting a ramp signal control scheme by an upper computer according to the real-time traffic flow, and establishing a SARSA signal control model;

a model parameter calibration step: acquiring a car-following and lane-changing model, and calibrating the parameters of the car-following and lane-changing model;

and a simulation step: training the SARSA signal control model and the calibrated car-following and lane-changing model according to preset requirements, and obtaining an optimized ramp signal control scheme.

Further, the establishment of the SARSA signal control model includes:

(1) episode setting: each signal cycle is one step, and five cycles form one episode;

(2) establishing an action space: cycle-length optimization and green-split optimization;

(3) establishing a state space: three indicators measured at the end of the signal cycle, namely the main-line upstream vehicle count, the ramp vehicle count, and the main-line downstream occupancy, form the state space, and the occupancy is discretized;

(4) an action selection mechanism in which the ε-greedy exploration mechanism is removed;

(5) reward function: the average running speed of vehicles in the road network over one cycle is used as the reward.

Further, the parameters of the car-following and lane-changing model are calibrated using the particle swarm algorithm and data provided by the NGSIM data set.

Further, the particle swarm algorithm comprises the following steps:

initializing the position and speed of each particle;

calculating the fitness of each particle;

updating individual optimality and group optimality;

updating the speed and position of each particle;

and judging whether the termination condition is met; if so, ending, and if not, returning to the step of calculating the fitness of each particle.

Further, the iterative formulas for velocity and position are:

v_i(t+1) = ω·v_i(t) + c1·rand1()·(pbest_i - x_i(t)) + c2·rand2()·(gbest_i - x_i(t))

x_i(t+1) = x_i(t) + v_i(t+1)

where x_i(t) is the position of particle i at time t; v_i(t) is the velocity of particle i; pbest_i is the best position found by the i-th particle so far; gbest_i is the best position found by the whole swarm so far; ω is the inertia factor; c1 and c2 are learning factors; and rand1() and rand2() are the self-cognitive learning rate and the social-cognitive learning rate, respectively.

Further, the car-following and lane-changing model jointly considers the driving force of the driver's desired acceleration and the resistance imposed by the leading vehicle as an obstacle.

Further, the IDM comprises the following two equations:

a_n(t) = a·[1 - (v_n(t)/v_0)^σ - (s*(t)/s_n(t))^2]

s*(t) = s_0 + max(0, v_n(t)·τ + v_n(t)·Δv_n(t)/(2·√(a·b)))

where a_n(t) is the acceleration of the following vehicle, s_n(t) is the gap (headway distance) at time t, s*(t) is the desired gap, s_0 is the minimum gap, τ is the desired time headway, v_0 is the desired driving speed, σ is the acceleration exponent, a is the desired maximum acceleration, b is the desired maximum deceleration, v_n(t) is the speed of the following vehicle at time t, and Δv_n(t) is the speed difference between the following and leading vehicles at time t.

A second aspect provides a reinforcement learning-based ramp signal control optimization system, including:

a ramp intersection optimization control module, which detects real-time traffic flow on the road through single-point adaptive control; an upper computer selects a ramp signal control scheme according to the real-time traffic flow and establishes a SARSA signal control model;

a model parameter calibration module, which acquires a car-following and lane-changing model and calibrates the parameters of the car-following and lane-changing model;

and a simulation module, which trains the SARSA signal control model and the calibrated car-following and lane-changing model according to preset requirements to obtain an optimized ramp signal control scheme.

A third aspect provides a computer device comprising a memory and a processor, the memory having stored therein computer-readable instructions, which, when executed by the processor, cause the processor to perform the steps of the reinforcement learning-based ramp signal control optimization method described above.

A fourth aspect provides a computer-readable storage medium storing computer instructions which, when executed by one or more processors, implement the steps of the reinforcement learning-based ramp signal control optimization method according to the first aspect.

The advantages of the invention are as follows: an expressway ramp signal control optimization method based on reinforcement learning is designed, the ramp control method is verified and its effect evaluated through SUMO traffic simulation, and new ideas and methods are provided for subsequent theoretical research and engineering application.

Drawings

Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:

FIG. 1 is a flowchart illustrating a reinforcement learning-based method for controlling and optimizing ramp signals on an expressway according to an embodiment of the present invention;

FIG. 2 shows a flow diagram of a particle swarm algorithm according to an embodiment of the invention;

FIG. 3 is a block diagram of a reinforcement learning-based ramp signal control optimization system in one embodiment;

fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present application;

fig. 5 is a schematic diagram of a storage medium provided in an embodiment of the present application.

Detailed Description

Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.

Interpretation of terms:

SUMO (Simulation of Urban MObility) is free, open-source traffic simulation software that enables microscopic control of traffic flow: the route of each vehicle on the road can be planned individually. SUMO is a microscopic, continuous road traffic simulation developed mainly by the German Aerospace Center. The project started in 2000 as an open-source, microscopic road traffic simulation whose primary objective is to give traffic research institutions a tool for implementing and evaluating their own algorithms.

The SARSA algorithm is a temporal-difference method for solving the reinforcement learning control problem and belongs to the on-policy temporal-difference control algorithms. Its name comes from the tuple (S, A, R, S', A'), where S, A, and R denote State, Action, and Reward, respectively.

Car-Following (CF) behavior is the most basic microscopic driving behavior, describing the interaction between two neighboring vehicles in a platoon traveling on a single lane where overtaking is restricted. A car-following model studies, with dynamical methods, how a Following Vehicle (FV) responds to changes in the motion state of the Leading Vehicle (LV). By analyzing how vehicles follow one another, the traffic-flow characteristics of a single lane can be understood, building a bridge between microscopic driver behavior and macroscopic traffic phenomena.

Intelligent Driver Model (IDM).

The invention provides a reinforcement learning-based control optimization method for expressway ramp signals, whose basic flow is shown in Fig. 1 and which comprises the following steps:

s1, optimizing and controlling steps of ramp intersection. The method comprises the steps that real-time traffic flow on a road is detected through single-point self-adaptive control, and an upper computer selects a ramp signal control scheme according to the real-time traffic flow to establish an SARSA signal control model.

The invention selects the most common signal-light control mode, one easily accepted by drivers: single-point adaptive signal control. The single-point adaptive control detects real-time traffic flow on the road and uploads it to the upper computer, which selects the optimal traffic control scheme, thereby optimizing indicators such as vehicle speed, delay, and occupancy. Moreover, combined with reinforcement learning, the adaptive control can train itself in the current environment to cope with the uncertainty of the traffic flow. Vehicles merging from the ramp are controlled, and signal control is implemented with the SARSA reinforcement learning method.

The establishment of the SARSA signal control model comprises the following steps:

(1) Episode setting: each signal cycle is one step, and five cycles form one episode.

(2) Establishing the action space: cycle-length optimization and green-split optimization.

(3) Establishing the state space: three indicators measured at the end of the signal cycle, namely the main-line upstream vehicle count, the ramp vehicle count, and the main-line downstream occupancy, form the state space, and the occupancy is discretized.

(4) Action selection mechanism: the SARSA adopted here performs single-step updates and always selects the action with the maximum Q value in a given state. To avoid bad cases, the ε-greedy exploration mechanism is removed.

(5) Reward function: the average running speed of vehicles in the road network over one cycle is used as the reward.
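The five elements above can be sketched as a tabular SARSA learner in Python. This is an illustrative sketch, not the patented implementation: the learning rate ALPHA, the discount GAMMA, and the state encoding are assumed values, and actions are simply indexed 0 to 21 to match the 22 signal schemes.

```python
from collections import defaultdict

ALPHA, GAMMA = 0.1, 0.9   # learning rate and discount factor (assumed values)
N_ACTIONS = 22            # 13 cycle adjustments + 9 green-split options

Q = defaultdict(float)    # Q[(state, action)] -> action value, default 0.0

def choose_action(state):
    """Purely greedy selection; the text removes the epsilon-greedy mechanism."""
    return max(range(N_ACTIONS), key=lambda a: Q[(state, a)])

def sarsa_update(s, a, r, s_next, a_next):
    """One-step SARSA: Q(s,a) += alpha * (r + gamma * Q(s',a') - Q(s,a))."""
    Q[(s, a)] += ALPHA * (r + GAMMA * Q[(s_next, a_next)] - Q[(s, a)])
```

One step corresponds to one signal cycle, with the reward r being the network-wide average speed over that cycle; five consecutive steps would form one episode.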

S2, model parameter calibration step. A car-following and lane-changing model is acquired, and its parameters are calibrated.

The model parameter calibration step mainly calibrates the parameters of the IDM car-following model. Considering the "driving force" of the driver's desired acceleration and the "resistance" imposed by the leading vehicle, the IDM proposes the following two equations:

a_n(t) = a·[1 - (v_n(t)/v_0)^σ - (s*(t)/s_n(t))^2]

s*(t) = s_0 + max(0, v_n(t)·τ + v_n(t)·Δv_n(t)/(2·√(a·b)))

where a_n(t) is the acceleration of the following vehicle, s_n(t) is the gap (headway distance) at time t, s*(t) is the desired gap, s_0 is the minimum gap, τ is the desired time headway, v_0 is the desired driving speed, σ is the acceleration exponent, a is the desired maximum acceleration, b is the desired maximum deceleration, v_n(t) is the speed of the following vehicle at time t, and Δv_n(t) is the speed difference between the following and leading vehicles at time t.
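The two IDM equations can be written as a small Python function. The default parameter values below are illustrative placeholders, not the calibrated values obtained later in the text.

```python
import math

def idm_acceleration(v, dv, s, v0=33.3, s0=2.0, tau=1.5, a=1.0, b=3.0, sigma=4):
    """IDM acceleration of the following vehicle.

    v  : speed of the follower (m/s)
    dv : speed difference, follower minus leader (m/s)
    s  : current gap to the leader (m)
    The remaining arguments are the IDM parameters (desired speed v0,
    minimum gap s0, time headway tau, maximum acceleration a, comfortable
    deceleration b, acceleration exponent sigma); defaults are illustrative.
    """
    s_star = s0 + max(0.0, v * tau + v * dv / (2.0 * math.sqrt(a * b)))
    return a * (1.0 - (v / v0) ** sigma - (s_star / s) ** 2)
```

With a large gap and low speed the output approaches the maximum acceleration a; with a short gap it turns strongly negative, reproducing the driving-force/resistance balance described above.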

The parameters of the IDM are calibrated using the particle swarm algorithm and data provided by the NGSIM data set. The flow of the particle swarm algorithm, shown in Fig. 2, comprises the following steps:

initializing the position and speed of each particle;

calculating the fitness of each particle;

updating individual optimality and group optimality;

updating the speed and position of each particle;

and judging whether the termination condition is met; if so, ending, and if not, returning to the step of calculating the fitness of each particle.

The iterative formulas for velocity and position are:

v_i(t+1) = ω·v_i(t) + c1·rand1()·(pbest_i - x_i(t)) + c2·rand2()·(gbest_i - x_i(t))

x_i(t+1) = x_i(t) + v_i(t+1)

where x_i(t) is the position of particle i at time t; v_i(t) is the velocity of particle i; pbest_i is the best position found by the i-th particle so far; gbest_i is the best position found by the whole swarm so far. ω is called the inertia factor, and c1 and c2 are called learning factors. The formula contains two rand() terms, namely the self-cognitive learning rate rand1() and the social-cognitive learning rate rand2().
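The five steps and the update formulas combine into a basic particle swarm optimizer. The sketch below uses the parameter values stated later in the text (ω = 0.8, c1 = c2 = 2); the search bounds, swarm size, and iteration count are assumptions.

```python
import random

def pso(fitness, dim, n_particles=20, iters=100, lo=-5.0, hi=5.0,
        w=0.8, c1=2.0, c2=2.0, seed=0):
    """Minimise `fitness` over [lo, hi]^dim with a basic particle swarm."""
    rng = random.Random(seed)
    # Step 1: initialise the position and velocity of each particle.
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    # Steps 2-3: evaluate fitness; record individual and global bests.
    pbest = [x[:] for x in X]
    pbest_f = [fitness(x) for x in X]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):                       # step 5: termination condition
        for i in range(n_particles):
            for d in range(dim):                 # step 4: velocity and position update
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (pbest[i][d] - X[i][d])
                           + c2 * rng.random() * (gbest[d] - X[i][d]))
                X[i][d] += V[i][d]
            f = fitness(X[i])
            if f < pbest_f[i]:                   # step 3 again: update bests
                pbest[i], pbest_f[i] = X[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = X[i][:], f
    return gbest, gbest_f
```

For IDM calibration, `fitness` would measure the deviation between NGSIM trajectories and trajectories simulated with a candidate parameter vector; here any function of a position vector works.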

Parameter calibration of the IDM through the particle swarm algorithm yields the values of all parameters, which are then used in the simulation stage.

S3, simulation step. The SARSA signal control model and the calibrated car-following and lane-changing model are trained according to preset requirements to obtain an optimized ramp signal control scheme.

In the simulation step, SUMO is used for simulation once the adaptive control strategy and the calibrated parameters have been obtained. A traffic demand is generated in the simulation environment for long-duration training, and the simulated performance of the on-ramp intersection without control is compared with its performance under the trained signal control, verifying the merits of the model.

The method comprises the following specific implementation steps:

s1, the specific control process of the single-point adaptive control optimization is as follows:

(1) The initial signal cycle is 40 s, with the yellow time fixed at 3 s and the initial green time at 15 s.

(2) Signal cycle optimization scheme: [-6, -5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5, 6], unit: s.

(3) Green-split optimization scheme: [20%, 25%, 30%, 35%, 40%, 45%, 50%, 55%, 60%].

The signal cycle optimization scheme and the green-split optimization scheme together provide 22 options, which jointly form the signal-light optimization scheme. The yellow time is fixed at 3 s, and the minimum green time is 10 s.
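One hypothetical encoding of these 22 signal-light schemes as a reinforcement-learning action space (the tuple representation and the `apply_action` helper are illustrative, not taken from the text):

```python
CYCLE_DELTAS = list(range(-6, 7))   # 13 cycle adjustments, in seconds
SPLITS = [0.20, 0.25, 0.30, 0.35, 0.40, 0.45, 0.50, 0.55, 0.60]  # 9 split options

ACTIONS = ([("cycle", d) for d in CYCLE_DELTAS]
           + [("split", g) for g in SPLITS])     # 13 + 9 = 22 actions

def apply_action(cycle, split, action, min_green=10):
    """Apply one action; the green time is clamped to the 10 s minimum."""
    kind, value = action
    if kind == "cycle":
        cycle += value
    else:
        split = value
    green = max(min_green, round(cycle * split))
    return cycle, split, green
```

The clamp reflects the constraint stated above: whatever action is chosen, the green time never drops below 10 s while the yellow time stays fixed at 3 s.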

The specific steps for establishing the SARSA model are as follows:

(1) Episode setting: each signal cycle is one step, and five cycles form one episode.

(2) Establishing the action space: cycle-length optimization and green-split optimization, 22 schemes in total.

(3) Establishing the state space: three indicators measured at the end of the signal cycle, namely the main-line upstream vehicle count, the ramp vehicle count, and the main-line downstream occupancy, form the state space, and the occupancy is discretized. For simplicity of representation, the code is as follows:

o = [m * 1000 for m in range(31)]  # number of vehicles upstream of the main line

w = [n * 10 for n in range(31)]  # number of ramp vehicles

v = [0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1]  # occupancy 150 m downstream of the main line

(4) Action selection mechanism: the SARSA adopted here performs single-step updates and always selects the action with the maximum Q value in a given state. To avoid bad cases, the ε-greedy exploration mechanism is removed.

(5) Reward function: the average running speed of vehicles in the road network over one cycle is used as the reward.
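A sketch of how the three indicators of step (3) could be combined into a discrete state, with the occupancy snapped to the 0.1-wide bins listed above (the helper names are hypothetical):

```python
import bisect

OCCUPANCY_BINS = [i / 10 for i in range(11)]   # 0.0, 0.1, ..., 1.0

def discretize(value, bins):
    """Index of the highest bin edge not exceeding `value`."""
    return max(0, bisect.bisect_right(bins, value) - 1)

def encode_state(upstream_count, ramp_count, occupancy):
    """Combine the three indicators into a hashable state tuple."""
    return (upstream_count, ramp_count, discretize(occupancy, OCCUPANCY_BINS))
```

The resulting tuple is hashable, so it can serve directly as the state key in a tabular SARSA Q-table.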

S2, parameter calibration of the IDM, with the following specific steps:

(1) Vehicle data are obtained by screening and processing the NGSIM data set for use by the particle swarm algorithm, whose flow is shown in Fig. 2.

(2) Set the algorithm-related parameters.

ω is called the inertia factor; a larger ω gives stronger global search ability but weaker local search ability. Weighing these, this invention takes ω = 0.8.

c1 and c2 are called learning factors; this invention takes c1 = c2 = 2.

The formula contains two rand() terms, namely the self-cognitive learning rate and the social-cognitive learning rate, each a random number in (0, 1). In this invention both are taken as 0.5.

(3) Parameter calibration yields the result:

mingap, accel, decel, tau, delta = [1.96, 0.99, 3.20, 0.98, 3.51]
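SUMO's built-in IDM car-following model accepts such calibrated values as vType attributes. A sketch of generating a corresponding definition (the vType id and the surrounding route file are assumptions, not specified in the text):

```python
# Calibrated IDM parameters mapped onto SUMO vType attribute names.
params = {"minGap": 1.96, "accel": 0.99, "decel": 3.20, "tau": 0.98, "delta": 3.51}

# Build the XML element; it would be placed in a SUMO route or additional file.
vtype = '<vType id="idm_car" carFollowModel="IDM" {}/>'.format(
    " ".join('{}="{}"'.format(k, v) for k, v in params.items()))
```

Assigning this vType to the simulated vehicles lets the SUMO run in step S3 use the calibrated car-following behavior.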

s3, simulation comparison:

(1) A simulation environment is built in SUMO based on the West Ring Expressway and Labor Road ramp, and a traffic demand is generated in the environment for long-duration training.

(2) The main-line traffic flow and the on-ramp traffic flow are defined separately, and the demanded vehicles are adjusted through the probability parameter.

(3) After training for a simulated time of 320000 s, the trained result is tested under the same simulation environment and demand as in training.

The simulation comparison shows that, compared with the uncontrolled intersection, congestion of the main-line traffic flow is reduced under the trained signal control, which demonstrates the feasibility of the model.

As shown in fig. 3, in one embodiment, a reinforcement learning-based ramp signal control optimization system is provided, which may include:

the ramp intersection optimization control module 411 detects real-time traffic flow on the road through single-point adaptive control; the upper computer selects a ramp signal control scheme according to the real-time traffic flow and establishes a SARSA signal control model;

the model parameter calibration module 412 acquires a car-following and lane-changing model and calibrates its parameters;

and the simulation module 413 trains the SARSA signal control model and the calibrated car-following and lane-changing model according to preset requirements to obtain an optimized ramp signal control scheme.

The embodiment of the present application further provides an electronic device corresponding to the reinforcement learning-based ramp signal control optimization method provided in the foregoing embodiment, so as to execute that method. The embodiments of the present application are not limited in this respect.

Referring to fig. 4, a schematic diagram of an electronic device provided in some embodiments of the present application is shown. As shown in fig. 4, the electronic device 2 includes: the system comprises a processor 200, a memory 201, a bus 202 and a communication interface 203, wherein the processor 200, the communication interface 203 and the memory 201 are connected through the bus 202; the memory 201 stores a computer program that can be executed on the processor 200, and the processor 200 executes the computer program to execute the reinforcement learning-based ramp signal control optimization method provided by any one of the foregoing embodiments of the present application.

The memory 201 may include high-speed Random Access Memory (RAM) and may also include non-volatile memory, such as at least one disk memory. The communication connection between this system's network element and at least one other network element is implemented through at least one communication interface 203 (wired or wireless), and may use the Internet, a wide area network, a local area network, a metropolitan area network, and the like.

Bus 202 can be an ISA bus, PCI bus, EISA bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. The memory 201 is configured to store a program, and the processor 200 executes the program after receiving an execution instruction, where the method for optimizing control of ramp signal based on reinforcement learning disclosed in any embodiment of the present application may be applied to the processor 200, or implemented by the processor 200.

The processor 200 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware or instructions in the form of software in the processor 200. The Processor 200 may be a general-purpose Processor, and includes a Central Processing Unit (CPU), a Network Processor (NP), and the like; but may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), an off-the-shelf programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components. The various methods, steps, and logic blocks disclosed in the embodiments of the present application may be implemented or performed. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present application may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor. The software module may be located in ram, flash memory, rom, prom, or eprom, registers, etc. storage media as is well known in the art. The storage medium is located in the memory 201, and the processor 200 reads the information in the memory 201 and completes the steps of the method in combination with the hardware thereof.

The electronic device provided by the embodiment of the application and the reinforced learning-based ramp signal control optimization method provided by the embodiment of the application have the same inventive concept and have the same beneficial effects as the method adopted, operated or realized by the electronic device.

Referring to fig. 5, the computer readable storage medium is an optical disc 30, and a computer program (i.e., a program product) is stored thereon, and when being executed by a processor, the computer program executes the ramp signal control optimization method based on reinforcement learning provided in any of the foregoing embodiments.

As shown in fig. 5, the program may be stored in the memory; the processor implements the steps of the method when executing the program stored in the memory.

Alternatively, in other embodiments, the program may be divided into one or more modules, which are stored in the memory and executed by one or more processors (in this embodiment, the processor) to implement the present invention.

It should be noted that examples of the computer-readable storage medium may also include, but are not limited to, phase change memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory, or other optical and magnetic storage media, which are not described in detail herein.

The computer-readable storage medium provided by the above-mentioned embodiment of the present application and the reinforced learning-based ramp signal control optimization method provided by the embodiment of the present application have the same beneficial effects as the method adopted, operated or implemented by the application program stored in the computer-readable storage medium.

It should be noted that:

the algorithms and displays presented herein are not inherently related to any particular computer, virtual machine, or other apparatus. Various general purpose devices may be used with the teachings herein. The required structure for constructing such a device will be apparent from the description above. In addition, this application is not directed to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the present application as described herein, and any descriptions of specific languages are provided above to disclose the best modes of the present application.

In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the application may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.

Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the application, various features of the application are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the application and aiding in the understanding of one or more of the various inventive aspects. However, this method of disclosure should not be interpreted as reflecting an intention that the claimed application requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this application.

Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.

Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the application and form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.

The various component embodiments of the present application may be implemented in hardware, in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that a microprocessor or digital signal processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components of the ramp signal control optimization system according to embodiments of the present application. The present application may also be embodied as apparatus or device programs (e.g., computer programs and computer program products) for performing part or all of the methods described herein. Such programs implementing the present application may be stored on a computer-readable medium or may take the form of one or more signals. Such a signal may be downloaded from an internet website, provided on a carrier signal, or provided in any other form.

It should be noted that the above-mentioned embodiments illustrate rather than limit the application, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The application may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In a unit claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, and so on does not indicate any ordering; these words may be interpreted as names.

The above description is only for the preferred embodiment of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present application should be covered within the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
