Multi-cloud heterogeneous data processing method and device, and electronic device
1. A multi-cloud heterogeneous data processing method, characterized by comprising the following steps:
acquiring heterogeneous monitoring data of at least one adaptation object at every preset time interval;
determining, through a preset adaptation unit, an adaptation rule corresponding to each adaptation object from a preset rule table;
converting the heterogeneous monitoring data of each adaptation object according to the adaptation rule corresponding to the adaptation object to obtain standardized data corresponding to the heterogeneous monitoring data of each adaptation object; and
processing the standardized data corresponding to the heterogeneous monitoring data of each adaptation object according to a preset monitoring processing rule to obtain a monitoring result, and processing each adaptation object according to the monitoring result.
2. The method according to claim 1, wherein the determining, through a preset adaptation unit, an adaptation rule corresponding to each adaptation object from a preset rule table comprises:
determining, through the preset adaptation unit, any one or more of the following adaptation rules corresponding to each adaptation object from the preset rule table: a custom field value, a field name, a field type, a unit conversion rule, and a field matching priority.
3. The method according to claim 1, wherein the acquiring heterogeneous monitoring data of at least one adaptation object at every preset time interval comprises:
acquiring the heterogeneous monitoring data of the at least one adaptation object through a message queue or a representational state transfer (REST) interface at every preset time interval, wherein the at least one adaptation object comprises adaptation objects of different types and adaptation objects of different dimensions within a same type.
4. The method according to claim 1, wherein the processing the standardized data corresponding to the heterogeneous monitoring data of each adaptation object according to a preset monitoring processing rule to obtain a monitoring result comprises:
processing the standardized data corresponding to the heterogeneous monitoring data of each adaptation object according to any one or more of a statistical indicator algorithm, a transient event algorithm, a continuous event algorithm, a delay event algorithm, and a data skew problem algorithm to obtain the monitoring result.
5. The method according to claim 4, wherein the processing each adaptation object according to the monitoring result comprises:
if the monitoring result indicates abnormal data, querying target monitoring data corresponding to the abnormal target adaptation object through a generic query interface and/or a generic aggregation interface to obtain a query result, and sending the query result to a terminal device for display; and
if the monitoring result indicates normal data, sending a normal-data prompt to the terminal device.
6. The method according to any one of claims 1 to 5, wherein after the processing the standardized data corresponding to the heterogeneous monitoring data of each adaptation object according to the preset monitoring processing rule to obtain the monitoring result, the method further comprises:
storing the monitoring result in a non-relational database.
7. A multi-cloud heterogeneous data processing apparatus, comprising:
an acquisition module, configured to acquire heterogeneous monitoring data of at least one adaptation object at every preset time interval; and
a processing module, configured to determine, through a preset adaptation unit, an adaptation rule corresponding to each adaptation object from a preset rule table;
wherein the processing module is further configured to convert the heterogeneous monitoring data of each adaptation object according to the adaptation rule corresponding to the adaptation object to obtain standardized data corresponding to the heterogeneous monitoring data of each adaptation object; and
the processing module is further configured to process the standardized data corresponding to the heterogeneous monitoring data of each adaptation object according to a preset monitoring processing rule to obtain a monitoring result, and to process each adaptation object according to the monitoring result.
8. An electronic device, comprising: at least one processor and memory;
the memory stores computer-executable instructions;
the at least one processor executes the computer-executable instructions stored in the memory, causing the at least one processor to perform the multi-cloud heterogeneous data processing method according to any one of claims 1 to 6.
9. A computer-readable storage medium, wherein computer-executable instructions are stored in the computer-readable storage medium, and when the computer-executable instructions are executed by a processor, the multi-cloud heterogeneous data processing method according to any one of claims 1 to 6 is implemented.
10. A computer program product, comprising a computer program, wherein the computer program, when executed by a processor, implements the multi-cloud heterogeneous data processing method according to any one of claims 1 to 6.
Background
With the development of Internet technology, the era of cloud computing has formally arrived, and after years of development, more and more cloud providers have entered the cloud computing industry.
In the prior art, when a customer implements a service through cloud providers, especially a service involving monitoring and alarming, multiple cloud providers may be selected to implement the service together. The implementation of a conventional monitoring and alarm function generally includes modules such as data acquisition, data processing, and data display, and assumes a small data volume and a single data format.
However, for the massive heterogeneous monitoring data of multiple cloud providers, each cloud provider involves multiple products, and each product needs to be monitored from multiple dimensions. The volume of data to be processed is large, and problems such as non-uniform data formats easily occur. The prior art cannot realize unified monitoring of multi-cloud heterogeneous data, which affects the normal implementation of services.
Disclosure of Invention
Embodiments of the present invention provide a multi-cloud heterogeneous data processing method, device, and electronic device, which are used to realize unified monitoring of multi-cloud heterogeneous data and thereby ensure the normal implementation of services.
In a first aspect, an embodiment of the present invention provides a multi-cloud heterogeneous data processing method, including:
acquiring heterogeneous monitoring data of at least one adaptation object at every preset time interval;
determining, through a preset adaptation unit, an adaptation rule corresponding to each adaptation object from a preset rule table;
converting the heterogeneous monitoring data of each adaptation object according to the adaptation rule corresponding to the adaptation object to obtain standardized data corresponding to the heterogeneous monitoring data of each adaptation object; and
processing the standardized data corresponding to the heterogeneous monitoring data of each adaptation object according to a preset monitoring processing rule to obtain a monitoring result, and processing each adaptation object according to the monitoring result.
Optionally, the determining, through the preset adaptation unit, the adaptation rule corresponding to each adaptation object from the preset rule table includes:
determining, through the preset adaptation unit, any one or more of the following adaptation rules corresponding to each adaptation object from the preset rule table: a custom field value, a field name, a field type, a unit conversion rule, and a field matching priority.
Optionally, the acquiring heterogeneous monitoring data of at least one adaptation object at every preset time interval includes:
acquiring the heterogeneous monitoring data of the at least one adaptation object through a message queue or a representational state transfer (REST) interface at every preset time interval, wherein the at least one adaptation object comprises adaptation objects of different types and adaptation objects of different dimensions within a same type.
Optionally, the processing the standardized data corresponding to the heterogeneous monitoring data of each adaptation object according to a preset monitoring processing rule to obtain a monitoring result includes:
processing the standardized data corresponding to the heterogeneous monitoring data of each adaptation object according to any one or more of a statistical indicator algorithm, a transient event algorithm, a continuous event algorithm, a delay event algorithm, and a data skew problem algorithm to obtain the monitoring result.
Optionally, the processing each adaptation object according to the monitoring result includes:
if the monitoring result indicates abnormal data, querying target monitoring data corresponding to the abnormal target adaptation object through a generic query interface and/or a generic aggregation interface to obtain a query result, and sending the query result to a terminal device for display; and
if the monitoring result indicates normal data, sending a normal-data prompt to the terminal device.
Optionally, after the processing the standardized data corresponding to the heterogeneous monitoring data of each adaptation object according to the preset monitoring processing rule to obtain the monitoring result, the method further includes:
storing the monitoring result in a non-relational database.
In a second aspect, an embodiment of the present invention provides a multi-cloud heterogeneous data processing apparatus, including:
an acquisition module, configured to acquire heterogeneous monitoring data of at least one adaptation object at every preset time interval; and
a processing module, configured to determine, through a preset adaptation unit, an adaptation rule corresponding to each adaptation object from a preset rule table;
wherein the processing module is further configured to convert the heterogeneous monitoring data of each adaptation object according to the adaptation rule corresponding to the adaptation object to obtain standardized data corresponding to the heterogeneous monitoring data of each adaptation object; and
the processing module is further configured to process the standardized data corresponding to the heterogeneous monitoring data of each adaptation object according to a preset monitoring processing rule to obtain a monitoring result, and to process each adaptation object according to the monitoring result.
In a third aspect, an embodiment of the present invention provides an electronic device, including: at least one processor and memory;
the memory stores computer-executable instructions;
the at least one processor executes the computer-executable instructions stored in the memory, causing the at least one processor to perform the multi-cloud heterogeneous data processing method according to the first aspect or any possible design of the first aspect.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium in which computer-executable instructions are stored; when a processor executes the computer-executable instructions, the multi-cloud heterogeneous data processing method according to the first aspect or any possible design of the first aspect is implemented.
In a fifth aspect, an embodiment of the present invention provides a computer program product, which includes a computer program; when the computer program is executed by a processor, it implements the multi-cloud heterogeneous data processing method according to the first aspect and the various possible designs of the first aspect.
Embodiments of the present invention provide a multi-cloud heterogeneous data processing method, device, and electronic device. After the scheme is adopted, heterogeneous monitoring data of at least one adaptation object can be acquired at every preset time interval; an adaptation rule corresponding to each adaptation object is then determined from a preset rule table through a preset adaptation unit; the heterogeneous monitoring data of each adaptation object is then converted according to the adaptation rule corresponding to the adaptation object to obtain standardized data corresponding to the heterogeneous monitoring data of each adaptation object; finally, the standardized data corresponding to the heterogeneous monitoring data of each adaptation object is processed according to the preset monitoring processing rule to obtain a monitoring result, and each adaptation object is processed according to the monitoring result. The heterogeneous data of multiple cloud providers is thus uniformly adapted by the adaptation unit to generate standardized data, which realizes unified monitoring of the multi-cloud heterogeneous data and ensures the normal implementation of services.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a schematic diagram illustrating a multi-cloud heterogeneous data processing method according to an embodiment of the present invention;
fig. 2 is a schematic flowchart of a multi-cloud heterogeneous data processing method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of an interface for automated operation and maintenance management according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a multi-cloud heterogeneous data processing apparatus according to an embodiment of the present invention;
fig. 5 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims, as well as in the drawings, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used are interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of including other sequential examples in addition to those illustrated or described. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In the prior art, when a customer implements a service through cloud providers, especially a service involving monitoring and alarming, multiple cloud providers (e.g., cloud provider A, cloud provider B, and cloud provider C) may be selected to implement the service together. The implementation of a conventional monitoring and alarm function generally includes modules such as data acquisition, data processing, and data display, and assumes a small data volume and a single data format. However, for the massive heterogeneous monitoring data of multiple cloud providers, each cloud provider may involve multiple products, and each product needs to be monitored from multiple dimensions (for example, the CPU, hard disk, and memory dimensions). The monitoring alarm data of the cloud providers differ in quantity, quality, format, units, and so on, the volume of data to be processed is large, and problems such as non-uniform data formats easily occur.
Based on the above problems, in the present invention the heterogeneous data of multiple cloud providers is uniformly adapted by an adaptation unit to generate standardized data, thereby realizing unified monitoring of the multi-cloud heterogeneous data and achieving the technical effect of ensuring the normal implementation of each service.
Fig. 1 is a schematic diagram illustrating the principle of a multi-cloud heterogeneous data processing method provided in an embodiment of the present invention. As shown in Fig. 1, the implementation principle of the multi-cloud heterogeneous data processing method in this embodiment may specifically include the following layers. Multi-cloud monitoring and acquisition: a self-developed cloud sinks its monitoring data to a local open-source stream processing platform, for example Kafka, through a data acquisition tool such as telegraf or openstack, while a heterogeneous cloud sinks its monitoring data to the local Kafka through a RESTful (representational state transfer) interface or a message queue. Heterogeneous data: the monitoring data of the cloud providers that has not yet been adapted sits at this layer. Adaptation: at this layer, the un-adapted data of the cloud providers is unified and standardized by a generic adaptation program (generic adaptation unit) together with a preset rule table, so that formats and units become uniform. Standardized data: the normalized data sits at this layer. Monitoring and alarming: Flink/Spark consumes the standardized raw monitoring data and computes the monitoring alarm information. Persistence: the monitoring alarm information is stored in a non-relational database, which may be, for example, a MongoDB database or an Elasticsearch database. API (Application Programming Interface) layer: a generic query interface and a generic aggregation interface satisfy single-table query and aggregation services above a preset threshold, which may be, for example, 90%. Data application: the generic interfaces may be invoked directly by front-end and back-end developers, or an Open Application Programming Interface (OpenAPI) may be added on top of them.
The technical solution of the present invention will be described in detail below with specific examples. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments.
Fig. 2 is a schematic flowchart of a multi-cloud heterogeneous data processing method according to an embodiment of the present invention, where the method of this embodiment may be executed by a server. As shown in fig. 2, the method of this embodiment may include:
s201: and acquiring heterogeneous monitoring data of at least one adaptive object every preset time.
In this embodiment, a plurality of adaptation objects may be involved when a service is implemented, so heterogeneous monitoring data of at least one adaptation object corresponding to the service may be acquired at every preset time interval.
The preset time interval can be customized according to the actual application scenario; for example, it can be any value between 2 and 5 seconds.
Further, the acquiring heterogeneous monitoring data of at least one adaptation object at every preset time interval may specifically include:
acquiring the heterogeneous monitoring data of the at least one adaptation object through a message queue or a representational state transfer (REST) interface at every preset time interval, wherein the at least one adaptation object comprises adaptation objects of different types and adaptation objects of different dimensions within a same type.
Specifically, the adaptation objects of different types may be adaptation objects of different cloud providers, for example cloud provider A, cloud provider B, or cloud provider C, and the adaptation objects of different dimensions within the same type may be different hardware devices of the same cloud provider, for example a CPU, a hard disk, or a memory.
Further, the adaptation objects may include custom adaptation objects (e.g., a self-developed cloud) and existing adaptation objects (e.g., heterogeneous clouds). A custom adaptation object can sink its monitoring data to the local Kafka through telegraf or openstack, and an existing adaptation object can sink its monitoring data to the local Kafka through a RESTful interface or a message queue.
S202: determining, through a preset adaptation unit, the adaptation rule corresponding to each adaptation object from the preset rule table.
The prior art also provides methods for heterogeneous data normalization, for example: a set of conversion rules is preset according to business and industry experience, and the various data sources are converted into standardized data according to these rules. In this embodiment, a set of adaptation programs may be written in advance according to business and industry experience and deployed at specific locations in memory to obtain the adaptation units; an adaptation unit may query the rule table at regular intervals and parse the source data into standardized data according to the corresponding adaptation rules. Because only one set of adaptation programs is needed, the adaptation rules can be configured dynamically, source data can be added dynamically, and the amount of code is significantly reduced. For example, metrics of multiple dimensions exist in a multi-cloud-provider environment; for the CPU dimension, it is only necessary to configure an adaptation rule for cloud provider A, cloud provider B, and cloud provider C respectively, and running the same set of adaptation programs standardizes the CPU-dimension metrics of all three cloud providers. This greatly improves the development efficiency of the data access layer, shortens the development time, and saves human resources.
In addition, for individual monitoring data that cannot be standardized by the generic adaptation program, a dedicated adaptation program can be set up separately for that heterogeneous data.
Further, the determining, through a preset adaptation unit, the adaptation rule corresponding to each adaptation object from the preset rule table may specifically include:
determining, through the preset adaptation unit, any one or more of the following adaptation rules corresponding to each adaptation object from the preset rule table: a custom field value, a field name, a field type, a unit conversion rule, and a field matching priority.
Specifically, each metric of each cloud provider needs to be configured with an adaptation rule in JSON format. Without a generic adaptation program, a corresponding standardization program would have to be written for every metric of every cloud provider, and the workload would grow as more cloud providers are added. To reduce the workload, the common logic of the whole adaptation process is extracted and packaged into one set of adaptation programs, and only the adaptation rules corresponding to the source data then need to be configured.
Taking the adaptation of the CPU metric of cloud provider A as an example, the adaptation program takes the source data, parses the adaptation rule in JSON format, and finally generates standardized data of the CPU dimension. The adaptation rule comprises two parts: a define part, which can customize field values and add them to the standardized data, and a match part, which specifies the standardized field name, field type, and so on. If other simple logic needs to be handled in the generic adaptation layer, adaptation syntax can be added dynamically, such as unit conversion and field matching priority. For new heterogeneous data, no separate standardization program needs to be written; adaptation is completed automatically once an adaptation rule in JSON format is configured, which reduces the workload. A specific example is as follows:
The monitoring data corresponding to the CPU of cloud provider A:
{
    "cpuUsedRate": "0.9999175071716309",
    "timestamp": "1606435560",
    "uuid": "ae84a94e-57a6-4bb6-a607-7338b75bef46"
}
The adaptation rule corresponding to the CPU of cloud provider A:
The standardized data corresponding to the CPU of cloud provider A after adaptation:
{
    "cloud": "huawei",
    "cpuNum": "cpu-total",
    "dateTime": 1606435560,
    "resourceUuid": "ae84a94e-57a6-4bb6-a607-7338b75bef46",
    "usage": 0.9999175071716309
}
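To make the define/match mechanism concrete, a minimal Scala sketch is given below. It is not the patented adaptation program itself: it assumes a rule consisting of a define part (constant fields added to the output) and a match part (a mapping from source field names to standardized field names), and it omits field types and unit conversion; all identifiers are illustrative.
case class AdaptRule(define: Map[String, Any], matchFields: Map[String, String])

// apply a rule: constant "define" fields plus renamed "match" fields taken from the source data
def adapt(source: Map[String, Any], rule: AdaptRule): Map[String, Any] =
  rule.define ++ rule.matchFields.collect {
    case (srcField, stdField) if source.contains(srcField) => stdField -> source(srcField)
  }

// usage with the CPU monitoring data of cloud provider A shown above
val source = Map[String, Any](
  "cpuUsedRate" -> 0.9999175071716309,
  "timestamp"   -> 1606435560L,
  "uuid"        -> "ae84a94e-57a6-4bb6-a607-7338b75bef46")
val rule = AdaptRule(
  define      = Map("cloud" -> "huawei", "cpuNum" -> "cpu-total"),
  matchFields = Map("cpuUsedRate" -> "usage", "timestamp" -> "dateTime", "uuid" -> "resourceUuid"))
val standardized = adapt(source, rule) // yields the standardized CPU data shown above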
S203: converting the heterogeneous monitoring data of each adaptation object according to the adaptation rule corresponding to the adaptation object to obtain standardized data corresponding to the heterogeneous monitoring data of each adaptation object.
In this embodiment, when time-series data is consumed, every scenario whose output depends on comparing earlier and later data must be completed with the help of a cached state; that is, all such scenarios require state management. The essence of state management is operating on the state, which includes:
State initialization: when the first piece of data arrives, the state needs to be initialized. State update: starting from the second piece of data, the state needs to be updated. State destruction: if no data arrives for a long time, a timeout-based state destruction can be set.
The monitoring alarm function can be implemented with the Spark or Flink distributed in-memory computing framework. Both Spark and Flink provide a mapWithState operator for state management; the difference is that the mapWithState operator in Spark can set a timeout-based state destruction, while the mapWithState operator in Flink can only initialize and update the state and cannot set a timeout-based destruction; if timeout handling is required in Flink, the KeyedProcessFunction class needs to be inherited and its onTimer function overridden to operate on the state. For example, the maximum, minimum, and average values in the heterogeneous monitoring data may be determined through Spark, as illustrated below.
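A minimal sketch of this Spark-based computation follows; it is not the patent's own program, and it assumes Spark Streaming's StateSpec/mapWithState API, a hypothetical DStream keyed by uuid, and the same Mem record used in the Flink example below.
import org.apache.spark.streaming.{Durations, State, StateSpec}

case class Mem(uuid: String, maxMem: Double, minMem: Double, totalMem: Double, countMem: Int)

// mapping function for mapWithState: merges each new Mem record into the state kept per uuid
def updateMem(uuid: String, newData: Option[Mem], state: State[Mem]): Option[Mem] = {
  if (state.isTimingOut()) {
    None // no data arrived within the timeout: Spark is destroying this state
  } else {
    val input = newData.get
    val updated = state.getOption() match {
      // state update: merge the old state with the new data
      case Some(s) => Mem(uuid, math.max(input.maxMem, s.maxMem), math.min(input.minMem, s.minMem),
        s.totalMem + input.totalMem, s.countMem + 1)
      // state initialization: the first piece of data for this uuid
      case None => Mem(uuid, input.maxMem, input.minMem, input.totalMem, 1)
    }
    state.update(updated)
    Some(updated)
  }
}

// memDStream is a hypothetical DStream[(String, Mem)] keyed by uuid; the 5-minute timeout
// destroys idle state, which is the capability Flink's mapWithState lacks
// val resultDStream = memDStream.mapWithState(
//   StateSpec.function(updateMem _).timeout(Durations.minutes(5)))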
the maximum value, the minimum value and the average value in the heterogeneous monitoring data can be determined through the flink, and the specific implementation process can be as follows:
// memory metric example
import org.apache.flink.streaming.api.scala._

case class Mem(uuid: String, maxMem: Double, minMem: Double,
               totalMem: Double, countMem: Int)

// Flink uses the mapWithState operator for state management;
// mapWithState requires a keyed stream, so the stream is keyed by uuid here
val resultDStream = memDStream.keyBy(_.uuid)
  .mapWithState[Option[Mem], Mem] { (input: Mem, state: Option[Mem]) =>
    state match {
      // state update: old state + new data => new state
      case Some(s) =>
        val newState = Some(Mem(
          uuid = s.uuid,
          maxMem = if (input.maxMem > s.maxMem) input.maxMem else s.maxMem,
          minMem = if (input.minMem > s.minMem) s.minMem else input.minMem,
          totalMem = input.totalMem + s.totalMem,
          countMem = s.countMem + 1
        ))
        val out = newState
        (out, newState)
      // state initialization: the first piece of data
      case None =>
        val newState = Some(Mem(uuid = input.uuid, maxMem = input.maxMem, minMem = input.minMem,
          totalMem = input.totalMem, countMem = 1))
        val out = newState
        (out, newState)
    }
  }
  .flatMap(f => f)
Further, after the adaptation rule corresponding to each adaptation object is determined, the heterogeneous monitoring data can be converted according to the adaptation rule corresponding to the adaptation object into standardized data that follows a unified standard.
S204: processing the standardized data corresponding to the heterogeneous monitoring data of each adaptation object according to a preset monitoring processing rule to obtain a monitoring result, and processing each adaptation object according to the monitoring result.
In this embodiment, the processing the standardized data corresponding to the heterogeneous monitoring data of each adaptation object according to a preset monitoring processing rule to obtain a monitoring result may specifically include:
processing the standardized data corresponding to the heterogeneous monitoring data of each adaptation object according to any one or more of a statistical indicator algorithm, a transient event algorithm, a continuous event algorithm, a delay event algorithm, and a data skew problem algorithm to obtain the monitoring result.
In this embodiment, the following scenarios may be involved:
Statistical indicator algorithms, for example: determining the maximum, minimum, average, and so on over a specified interval.
A statistical indicator algorithm first needs to make the mathematical formula of each indicator explicit, for example: maximum value: max(x); minimum value: min(x); average value: (x1 + x2 + ... + xn)/n.
It then needs to be confirmed which variables are maintained; in the Mem example above, the maximum requires one variable, maxMem; the minimum requires one variable, minMem; and the average requires two variables, totalMem and countMem. The indicator results are finally generated from the in-memory variables before being written to storage, for example: maximum value: maxMem; minimum value: minMem; average value: totalMem/countMem.
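As a minimal sketch with illustrative names only, the final indicator values can be derived from the accumulated Mem state right before persistence:
// derive the final indicator values from the accumulated Mem state before warehousing
def toIndicators(m: Mem): Map[String, Double] = Map(
  "maxMem" -> m.maxMem,
  "minMem" -> m.minMem,
  "avgMem" -> m.totalMem / m.countMem)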
In addition, other statistical indicators, such as variance, standard deviation, and median, follow similar algorithms.
Transient event algorithms, for example: a single piece of data hits a certain threshold. The algorithm can directly match the indicator value against the threshold, and a transient event does not involve state management.
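A minimal sketch of such a stateless check, with illustrative names, might look as follows:
// a transient event is a single data point matched directly against a threshold,
// so no state management is involved
case class Point(uuid: String, value: Double)
def transientAlarm(p: Point, threshold: Double): Boolean = p.value > threshold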
Continuous event algorithms, for example: counting a number of consecutive hits in the time-series data and computing a continuous event.
A continuous event is composed of multiple transient events and requires state management, which can be implemented with the mapWithState operators of Spark and Flink.
In the continuous event algorithm, each piece of data is matched against the threshold, and the business fields are then calculated by comparing the matching results of two consecutive pieces of data, so that every step of the full life cycle of the continuous event is known, the logic of the calculation process is clear, and the calculation result is accurate. Specifically, an implementation of the continuous event algorithm may be:
// alarmMetric: fields held in the state
// monitorMetric: fields in the new data
val (alarmTime, alarmDuration, alarmRecoveryTime) = (state, input) match {
  // both the previous and the current data exceed the threshold: the alarm continues
  case (true, true) =>
    (alarmMetric.alarmTime,
     alarmMetric.alarmDuration + (info.monitorMetric.endTime - info.monitorMetric.beginTime),
     -1L)
  // the previous data exceeded the threshold, the current one does not: the alarm recovers
  case (true, false) =>
    (alarmMetric.alarmTime, alarmMetric.alarmDuration, monitorMetric.endTime)
  // the previous data did not exceed the threshold, the current one does: a new alarm starts
  case (false, true) =>
    (info.alarmMetric.alarmTime,
     info.monitorMetric.endTime - info.monitorMetric.beginTime,
     -1L)
  // default case (false, false): no alarm
  case _ =>
    (-1L, 0L, -1L)
}
For example: monitoring data is collected every 1 minute, and the requirement is that exceeding the threshold more than 10 consecutive times is marked as one alarm, for which the alarm time, the alarm duration, and the alarm recovery time need to be calculated.
Further, each piece of data can be matched against the threshold and marked as exceeding or not exceeding it, and the alarm time, the alarm duration, and the alarm recovery time are calculated by comparing the matching results of consecutive pieces of data. The continuous event is thus recorded from the perspective of its full life cycle, the logic of the calculation process is clear, and the calculation result is accurate.
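A minimal sketch of this counting logic on a finite sequence of samples, with illustrative names and without a state management framework, might look as follows:
// samples arrive once per minute; exceeding the threshold `times` times in a row
// is marked as exactly one alarm, whose trigger timestamp is collected
case class Sample(ts: Long, value: Double)
def alarmTimestamps(samples: Seq[Sample], threshold: Double, times: Int = 10): Seq[Long] =
  samples.foldLeft((0, List.empty[Long])) { case ((run, alarms), s) =>
    val newRun = if (s.value > threshold) run + 1 else 0
    // record the alarm only when the run first reaches the required length
    if (newRun == times) (newRun, s.ts :: alarms) else (newRun, alarms)
  }._2.reverse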
Delay event algorithms, for example: if no monitoring data is received within 5 minutes, a health-state alarm is triggered. A delay event algorithm first defines a timer for the specific scenario, and the related operation is triggered when the delay exceeds the specified time.
The Spark framework supports setting a timeout (for example, Duration(5 * 60 * 1000)) and checking whether the state has timed out (state.isTimingOut()) during the state operation, so that the business operation can be triggered. The Flink framework needs to inherit the KeyedProcessFunction class and override the onTimer function to operate on the state.
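The following is a minimal sketch of the Flink variant, assuming the standard KeyedProcessFunction API and a hypothetical Metric input type; it is an illustration rather than the patent's own implementation.
import org.apache.flink.api.common.state.{ValueState, ValueStateDescriptor}
import org.apache.flink.streaming.api.functions.KeyedProcessFunction
import org.apache.flink.util.Collector

// hypothetical input type for the sketch
case class Metric(uuid: String, value: Double)

class HealthTimeoutFunction extends KeyedProcessFunction[String, Metric, String] {
  // remembers the last registered timer per key so it can be replaced when new data arrives
  private lazy val lastTimer: ValueState[java.lang.Long] =
    getRuntimeContext.getState(new ValueStateDescriptor("lastTimer", classOf[java.lang.Long]))

  override def processElement(value: Metric,
                              ctx: KeyedProcessFunction[String, Metric, String]#Context,
                              out: Collector[String]): Unit = {
    // new data arrived: cancel the previous timer and register a new one 5 minutes ahead
    Option(lastTimer.value()).foreach(t => ctx.timerService().deleteProcessingTimeTimer(t))
    val timer = ctx.timerService().currentProcessingTime() + 5 * 60 * 1000L
    ctx.timerService().registerProcessingTimeTimer(timer)
    lastTimer.update(timer)
  }

  override def onTimer(timestamp: Long,
                       ctx: KeyedProcessFunction[String, Metric, String]#OnTimerContext,
                       out: Collector[String]): Unit = {
    // no monitoring data was received within 5 minutes: trigger a health-state alarm
    out.collect(s"health alarm for ${ctx.getCurrentKey}")
  }
}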
Data skew problem algorithms, for example: a certain ID has too much data, resulting in an uneven data distribution. Data skew problems often appear in batch offline computing scenarios and cause the computing task to fail. First, the cause of the data skew is checked and the IDs of the piled-up data are determined; after the IDs causing the huge data imbalance are determined, they can be subdivided individually during the grouping calculation, for example grouped by ID_date, so that the data of the whole ID is spread across the ID_date groups instead of piling up in one place, and the computing task can complete normally; finally, the calculated results can be restored to the whole ID.
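A minimal sketch of the ID_date regrouping idea, using plain Scala collections and illustrative names instead of a distributed framework, might look as follows:
// a hot id is first grouped by id_date so its data is scattered across several groups,
// and the per-id result is then restored from the partial results
case class Rec(id: String, date: String, value: Long)

def skewSafeSum(records: Seq[Rec]): Map[String, Long] = {
  // step 1: group by id_date so no single group holds all of the hot id's data
  val partial = records.groupBy(r => s"${r.id}_${r.date}").map { case (k, rs) => k -> rs.map(_.value).sum }
  // step 2: restore the result for the whole id from the partial sums
  partial.groupBy { case (k, _) => k.split("_").head }.map { case (id, ps) => id -> ps.values.sum }
}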
In addition, the processing each adaptation object according to the monitoring result may specifically include:
and if the monitoring result is data abnormity, inquiring the target monitoring data corresponding to the abnormal target adaptation object according to the universal inquiry interface and/or the universal aggregation interface to obtain an inquiry result, and sending the inquiry result to the terminal equipment for displaying.
And if the monitoring result is that the data is normal, sending a normal data prompt to the terminal equipment.
Specifically, the design idea of the generic interfaces is that the parameters form the query syntax: the interface parameters directly define the conditions required by the database query and support programmable parameters for specific application scenarios. The monitoring alarm back end then only needs two generic interfaces for providing data: a single-table generic query interface and a single-table generic aggregation interface. For example, the parameters defined by the interfaces include: the table to query, equality query, range query, inequality query, in query, fuzzy query, returned fields, sorting rule, paging fields, and aggregation fields.
For example, a specific reference implementation of the generic query interface parameters may be:
{"collection":"alarminfo_realtime",
"equalCondition":{
"orgCode":"202011097513977",
"projectId":"2243909563626381855”
},
"inCondition":{
"projectName":[
"test item 1",
"test item 2"
]
},
"rangeCondition":{
"alarmTime":[
1606752000000,
1607443200000
]
},
"likeCondition":{
"instanceName":"test"
},
"returnFields":{
"alarmDesc":”",
"orgName":"",
"projectName":"",
"alarmLvl":"",
"status":"1"
},
"orderField":{
"alarmTime":"desc"
},
"skip":0,
"limit":10}
A specific reference implementation of the generic aggregation interface parameters may be:
"collection":alarm info__detail",
"equalCondition":{
"orgCode":"202011097513977",
"projectId":"2243909563626381855",
"initAlarmStatus":1
},
"rangeCondition":{
"beginTime":[
1606924800000,
1607529599000},
"subQueryFields":{|
"projectId":{"existField":1},
"projectName":{"existField":1},
"info_sum":{"existField":0,"if":{"alarmLvl":"info"},"then":1,"else":0},
"groupFields":[
"projectId",
"projectName
"returnFields":{
"total_count":0,
"info_sum":0
},
"totalNumField":{
"totalNum":0
},
"skip":0,
"limit":10}
The parameter list of the generic query interface includes: collection: the MongoDB collection to query; equalCondition: equality query; inCondition: in query; rangeCondition: range query; likeCondition: fuzzy query; returnFields: the fields to return, with support for setting default values; orderField: the sort field; skip and limit: paging fields. The query result returns the data details and the total number of hits.
In the parameters of the generic aggregation interface: groupFields: the grouping fields, with support for multi-level field grouping; subQueryFields: the sub-query fields, where existField: 1 denotes a real field in the table and existField: 0 denotes a field that does not exist in the table, for which an if-else condition can be customized; totalNumField: the total number of hits, where 0 means closed (the total number is not returned) and 1 means open (the total number is returned), and closing it is recommended when paging is used; returnFields: the fields to return. It should be noted that the aggregation interface matches the corresponding statistical algorithm according to the name of the returned field, for example: a field containing the "min" keyword calculates the minimum, "max" calculates the maximum, "sum" calculates the accumulated value, "aver" calculates the average, and "count" calculates the total number. Because the data volumes of the monitoring alarm tables differ greatly, differentiated indexes need to be created in advance before the generic interfaces are used, to improve query and aggregation performance. For services requiring multi-table join queries, a two-table join generic query interface can be added, or the business fields can be made redundant on the original table and then queried through the single-table generic interface. In individual business scenarios where the generic interfaces cannot meet the requirements, measures such as writing a personalized interface or computing the data with an offline batch program can be taken.
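As a minimal sketch, and purely as an assumption about how the two request bodies described above could be modeled, the generic interfaces might be represented as follows:
// hypothetical Scala models mirroring the parameter lists of the two generic interfaces
case class GenericQuery(
  collection: String,
  equalCondition: Map[String, Any] = Map.empty,
  inCondition: Map[String, Seq[Any]] = Map.empty,
  rangeCondition: Map[String, Seq[Long]] = Map.empty,
  likeCondition: Map[String, String] = Map.empty,
  returnFields: Map[String, String] = Map.empty,
  orderField: Map[String, String] = Map.empty,
  skip: Int = 0,
  limit: Int = 10)

case class GenericAggregation(
  collection: String,
  equalCondition: Map[String, Any] = Map.empty,
  rangeCondition: Map[String, Seq[Long]] = Map.empty,
  subQueryFields: Map[String, Map[String, Any]] = Map.empty,
  groupFields: Seq[String] = Nil,
  returnFields: Map[String, Int] = Map.empty,
  totalNumField: Map[String, Int] = Map.empty,
  skip: Int = 0,
  limit: Int = 10)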
After the above scheme is adopted, heterogeneous monitoring data of at least one adaptation object can be acquired at every preset time interval; the adaptation rule corresponding to each adaptation object is then determined from the preset rule table through the preset adaptation unit; the heterogeneous monitoring data of each adaptation object is then converted according to the adaptation rule corresponding to the adaptation object to obtain standardized data corresponding to the heterogeneous monitoring data of each adaptation object; finally, the standardized data corresponding to the heterogeneous monitoring data of each adaptation object is processed according to the preset monitoring processing rule to obtain a monitoring result, and each adaptation object is processed according to the monitoring result. The heterogeneous data of multiple cloud providers is thus uniformly adapted by the adaptation unit to generate standardized data, which realizes unified monitoring of the multi-cloud heterogeneous data and ensures the normal implementation of services.
Based on the method of fig. 2, the present specification also provides some specific embodiments of the method, which are described below.
In addition, in another embodiment, after S204, the method may further include:
storing the monitoring result in a non-relational database.
In this embodiment, the non-relational database may be a MongoDB database or an Elasticsearch database. MongoDB and Elasticsearch are document-oriented non-relational databases that can store complex data types and support powerful query languages; they can realize most of the single-table query functions of relational databases and also support building indexes on the data.
In addition, in another embodiment, the program that implements the monitoring alarm process may be implemented with Spark, Flink, or another distributed framework. After development, the distributed program needs to run on a cluster, and there are currently three common cluster deployment methods:
the first method comprises the following steps: and directly downloading the installation package installation cluster of the apache version, manually configuring parameters, manually upgrading and manually maintaining the compatibility among the components.
The second: use the automated operation and maintenance tool Ambari, which removes the need to consider problems such as parameter configuration, version upgrade, and component compatibility, and is suitable for the installation, operation, and maintenance of large-scale clusters.
The third: use the automated operation and maintenance tool Cloudera Manager, which likewise removes the need to consider problems such as parameter configuration, version upgrade, and component compatibility, and is suitable for the installation, operation, and maintenance of large-scale clusters.
Fig. 3 is a schematic diagram of an interface for automated operation and maintenance management provided in an embodiment of the present invention. As shown in Fig. 3, the automated operation and maintenance tool applied in this embodiment is Cloudera Manager, which provides automated cluster installation, centralized management, cluster monitoring, alarming, and other functions. Cloudera Manager supports automated installation, operation, and maintenance deployment of large-scale clusters of 200 to 500 nodes, with a short installation cycle and more convenient web-page-based operation and maintenance. In addition, the programs related to monitoring alarms can be submitted uniformly to the YARN component, and YARN is configured with the Capacity Scheduler strategy to uniformly schedule the resources of computing tasks such as Spark, Flink, and Hive, which improves resource utilization.
Based on the same idea, an embodiment of this specification further provides a device corresponding to the above method. Fig. 4 is a schematic structural diagram of a multi-cloud heterogeneous data processing apparatus provided in an embodiment of the present invention. As shown in Fig. 4, the apparatus may include:
the obtaining module 401 is configured to obtain heterogeneous monitoring data of at least one adapted object every preset time.
In this embodiment, the acquisition module 401 is further configured to:
acquire the heterogeneous monitoring data of the at least one adaptation object through a message queue or a representational state transfer (REST) interface at every preset time interval, wherein the at least one adaptation object comprises adaptation objects of different types and adaptation objects of different dimensions within a same type.
A processing module 402, configured to determine, through a preset adaptation unit, the adaptation rule corresponding to each adaptation object from the preset rule table.
In this embodiment, the processing module 402 is further configured to:
determine, through the preset adaptation unit, any one or more of the following adaptation rules corresponding to each adaptation object from the preset rule table: a custom field value, a field name, a field type, a unit conversion rule, and a field matching priority.
The processing module 402 is further configured to convert the heterogeneous monitoring data of each adaptation object according to the adaptation rule corresponding to the adaptation object to obtain standardized data corresponding to the heterogeneous monitoring data of each adaptation object.
The processing module 402 is further configured to process the standardized data corresponding to the heterogeneous monitoring data of each adaptation object according to a preset monitoring processing rule to obtain a monitoring result, and to process each adaptation object according to the monitoring result.
In this embodiment, the processing module 402 is further configured to:
process the standardized data corresponding to the heterogeneous monitoring data of each adaptation object according to any one or more of a statistical indicator algorithm, a transient event algorithm, a continuous event algorithm, a delay event algorithm, and a data skew problem algorithm to obtain the monitoring result.
Further, the processing module 402 is further configured to:
and if the monitoring result is data abnormity, inquiring the target monitoring data corresponding to the abnormal target adaptation object according to the universal inquiry interface and/or the universal aggregation interface to obtain an inquiry result, and sending the inquiry result to the terminal equipment for displaying.
And if the monitoring result is that the data is normal, sending a normal data prompt to the terminal equipment.
Moreover, in another embodiment, the processing module 402 is further configured to:
store the monitoring result in a non-relational database.
The apparatus provided in the embodiment of the present invention may implement the method in the embodiment shown in fig. 2, and the implementation principle and the technical effect are similar, which are not described herein again.
Fig. 5 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present invention, and as shown in fig. 5, a device 500 according to the embodiment includes: at least one processor 501 and memory 502. The processor 501 and the memory 502 are connected by a bus 503.
In a specific implementation, the at least one processor 501 executes the computer-executable instructions stored in the memory 502, so that the at least one processor 501 executes the method in the above-described method embodiments.
For the specific implementation process of the processor 501, reference may be made to the above method embodiments; the implementation principle and technical effect are similar and are not described here again.
In the embodiment shown in fig. 5, it should be understood that the Processor may be a Central Processing Unit (CPU), other general purpose processors, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), etc. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like. The steps of a method disclosed in connection with the present invention may be embodied directly in a hardware processor, or in a combination of the hardware and software modules within the processor.
The memory may comprise high speed RAM memory and may also include non-volatile storage NVM, such as at least one disk memory.
The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended ISA (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, the buses in the figures of the present application are not limited to only one bus or one type of bus.
An embodiment of the present invention also provides a computer-readable storage medium in which computer-executable instructions are stored; when a processor executes the computer-executable instructions, the multi-cloud heterogeneous data processing method of the above method embodiments is implemented.
An embodiment of the present invention further provides a computer program product, which includes a computer program, and when the computer program is executed by a processor, the method for processing the multi-cloud heterogeneous data is implemented.
The computer-readable storage medium may be implemented by any type of volatile or non-volatile memory device or combination thereof, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk. Readable storage media can be any available media that can be accessed by a general purpose or special purpose computer.
An exemplary readable storage medium is coupled to the processor such that the processor can read information from, and write information to, the readable storage medium. Of course, the readable storage medium may also be an integral part of the processor. The processor and the readable storage medium may reside in an application-specific integrated circuit (ASIC). Of course, the processor and the readable storage medium may also reside as discrete components in the apparatus.
Those of ordinary skill in the art will understand that: all or a portion of the steps of implementing the above-described method embodiments may be performed by hardware associated with program instructions. The program may be stored in a computer-readable storage medium. When executed, the program performs steps comprising the method embodiments described above; and the aforementioned storage medium includes: various media that can store program codes, such as ROM, RAM, magnetic or optical disks.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.