Medical information sharing system


Claims

1. A medical information sharing system, characterized in that: the system comprises a security center and a plurality of user terminals, wherein the security center comprises a security verification module, which is used to obtain a public/private key pair and to generate the system's public parameters and master key; to extract abstract information from each patient's medical record documents and obtain topic keywords; and to select ID information of a user and generate a unique, deterministic ID number by an agreed algorithm, wherein the ID number and the topic keywords are used to index the user's medical record information;

the user terminal is used to request the medical data of the patient with the corresponding ID number from the server; the user terminal interacts with the security verification module of the security center, and after handshake confirmation is performed, the user terminal sends the public parameters, the master key and the user ID information; after the master key and the user ID information are verified, the security center downloads the patient's encrypted original medical data from the server; the public parameters are used to verify and identify the system and the user terminal, and to determine whether the user terminal has access rights;

the security verification module of the security center judges whether the attribute set of the data requester at the user terminal satisfies the attribute threshold in the access policy set by the patient for the encrypted medical data; if it does, the patient's medical data is decrypted successfully to obtain the plaintext; otherwise, decryption fails;

the medical data comprises an image data document; the user terminal obtains a plurality of medical images and their annotation information from the image data document, and selects the medical image with the highest significance level from the plurality of medical images for fusion with a locally acquired medical image of the user terminal.

2. The system of claim 1, wherein: the annotation information of the plurality of medical images comprises doctors' historical browsing records.

3. The system of claim 2, wherein: the plurality of medical images are images of a predetermined region of the patient's body acquired by MRI, PET or PET/CT scanning.

4. The system of claim 3, wherein: the user terminal obtains the plurality of medical images and their annotation information from the image data document and selects the medical image with the highest significance level, wherein each medical image is divided into regions according to clinical relevance, reading time is used as a parameter in judging the significance level, and the significance level = a·(clinical relevance) + b·(segmented region) + c·(reading time) + d, where a, b and c are weighting coefficients and d is an offset.

5. The system of claim 4, wherein: the user terminal comprises a multi-modal image fusion unit and a medical image post-processing unit that are connected to each other through a network server; the medical image post-processing unit transmits post-processed medical images of multiple modalities to the multi-modal image fusion unit, and the multi-modal image fusion unit fuses the medical images of the multiple modalities.

6. The system of claim 5, wherein: the multi-modal image fusion unit fuses the patient's multi-modal medical images in a semi-automatic registration mode; for images of different modalities, the multi-modal image fusion unit selects clearly visible, one-to-one corresponding anatomical landmark points and performs registration using a point-set-to-point-set registration method.

7. The system of claim 6, wherein: the selection of one-to-one corresponding anatomical landmark points and the point-set-to-point-set registration specifically comprise: selecting a reference image from the multiple medical images to be fused and treating the remaining medical images as images to be registered; extracting corner features related to the anatomical landmark points from the reference image and the images to be registered; and then computing the similarity of the corner features in the two images one by one and performing corner feature matching.

8. The system of claim 7, wherein: before selecting the reference image, the user terminal judges whether the resolutions of the images to be fused are consistent; if not, the high-resolution and low-resolution image features corresponding to the images to be segmented and fused are subjected to down-sampling interaction processing and then fused to obtain high-resolution and low-resolution down-sampling interaction features; the down-sampling interaction features are subjected to convolution interaction processing and then fused to obtain high-resolution and low-resolution convolution interaction features; the convolution interaction features are subjected to up-sampling interaction processing and then fused to obtain high-resolution and low-resolution up-sampling interaction features; and the target object is segmented from the image to be segmented according to the high-resolution and low-resolution up-sampling interaction features.

9. The system of any of claims 1-8, wherein: the server is a remote cloud server or a near-end edge server provided by a third party.

10. The system of any of claims 1-8, wherein: the agreed algorithm is a hash algorithm or the RSA algorithm.

Background

Mutual trust and mutual recognition of medical test results have become a broad consensus, especially among medical institutions in large cities. At present, however, hospital informatization is limited to integrating and mutually recognizing data within a single hospital, and hospitals remain cautious about letting external devices access their information systems. Because the data are scattered and disorganized, the application systems are isolated from one another; there are also many software vendors whose systems keep their data independent, so the systems are isolated from each other in terms of data, workflow, and so on. Even software systems supplied by the same company do not fully share data with one another. Moreover, owing to operation and maintenance factors, indexes such as user information differ between deployments of the same vendor, since they are built on each hospital's historical base data, and the indexes actually enforced are likewise inconsistent.

In addition, considering the security of users and of hospital information, hospitals face the risk that information such as patient data and medical records may leak, and such leaks are difficult to detect and control. Hospitals are therefore apprehensive about sharing information, which forces patients to carry the same reports between different hospitals and departments or to repeat a large number of examinations.

Moreover, because patients carry various reports from hospital to hospital for examination, annotations and similar information from the previous hospital are unknown to the receiving hospital, and the original image data cannot serve as a real-time reference for subsequent examinations; this adds an unexpected burden on the patient and does not reduce the doctors' workload.

Therefore, it is necessary to provide a system that improves the sharing of hospital medical information, completes the transfer of information between hospitals, and improves the security of patient information. This is particularly important when the patient undergoes an imaging examination.

Disclosure of Invention

Therefore, the present application provides a medical information sharing system comprising a security center and a plurality of user terminals. The security center comprises a security verification module, which is used to obtain a public/private key pair, initialize the system, and generate the system's public parameters and master key; to extract abstract information from each patient's medical record documents and obtain topic keywords; and to select ID (identity) information of a user and generate a unique, deterministic ID number by an agreed algorithm, wherein the ID number indexes the user's medical record information;

the user terminal is used to request the medical data of the patient with the corresponding ID number from the server; after the user terminal and the security verification module of the security center perform handshake confirmation, the user terminal sends the public parameters, the master key and the user ID information, and after the security center verifies the master key and the user ID information, it downloads the patient's encrypted original medical data from the server; the public parameters are used to verify and identify the system and the user terminal, and to determine whether the user terminal has access rights;

the security verification unit of the security center judges whether the attribute set of the data requester at the user terminal satisfies the attribute threshold in the access policy set by the patient for the encrypted medical data; if it does, the patient's medical data is decrypted successfully to obtain the plaintext; otherwise, decryption fails;

the medical data is an image data document, and a history of browsing the image document is obtained; optionally, the document contains images of a predetermined region of the patient's body acquired by MRI, PET or PET/CT scanning, and scanning the patient may provide one or more images of that predetermined region.

The user terminal obtains the annotation information of the plurality of images, selects the image with the highest significance level from the plurality of images, and fuses it with the locally acquired image data of the user terminal.

Optionally, the significant image on the server is determined using the diagnosis mode, clinical relevance, segmented region, and reading time as parameters, specifically: significance level = a·(clinical relevance) + b·(segmented region) + c·(reading time) + d, where a, b and c are weighting coefficients and d is an offset; the significance level is stored in association with the image, and the fused image to be recommended is selected according to the significance level.

The user terminal acquires the key images and displays them fused with the images taken at the hospital; the multi-modal image fusion module in the user terminal is connected to the medical image post-processing unit and is used to fuse the patient's multi-modal medical images and to perform three-dimensional reconstruction and visualization;

optionally, the multi-modal image fusion unit and the medical image post-processing unit in the user terminal are connected to each other and connected to the security center or a third-party server through a network; the medical image post-processing unit transmits the post-processed medical images of multiple modalities to the multi-modal image fusion unit, which fuses them. The multi-modal image fusion unit fuses the patient's multi-modal medical images in a semi-automatic registration mode: for medical images of different modalities, it selects clearly visible, one-to-one corresponding anatomical landmark points and performs registration using a point-set-to-point-set registration algorithm;

optionally: a reference image is selected from the multiple images to be fused, and the remaining images are treated as images to be registered; corner features related to the anatomical landmark points are extracted from the reference image and the images to be registered; then the similarity of the corner features in the two images is computed one by one and corner feature matching is performed; next, the parameters of a geometric transformation model are estimated from the successfully matched feature point pairs; finally, the image to be registered is resampled and transformed according to the parameters of the geometric transformation model.

Optionally, before the reference image is selected, the method further includes judging whether the image resolutions are consistent; if they are not, the high-resolution and low-resolution image features corresponding to the images to be segmented and fused are subjected to down-sampling interaction processing and then fused to obtain high-resolution and low-resolution down-sampling interaction features; the down-sampling interaction features are subjected to convolution interaction processing and then fused to obtain high-resolution and low-resolution convolution interaction features; the convolution interaction features are subjected to up-sampling interaction processing and then fused to obtain high-resolution and low-resolution up-sampling interaction features; and the target object is segmented from the image to be segmented according to the high-resolution and low-resolution up-sampling interaction features.

The agreed algorithm is a hash algorithm or the RSA algorithm.
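
As an illustration only, the following minimal Python sketch shows how a deterministic ID number could be derived from user ID information with a hash; the field names and the truncation to 16 hexadecimal digits are assumptions made for the example and are not prescribed by this application.

```python
import hashlib

def derive_patient_id(id_info: dict) -> str:
    """Derive a deterministic ID number from user ID information via SHA-256.

    The serialization order and the 16-hex-digit truncation are illustrative
    assumptions; any agreed, collision-resistant scheme would serve equally well.
    """
    # Serialize the ID fields in a fixed order so the result is reproducible.
    canonical = "|".join(f"{key}={id_info[key]}" for key in sorted(id_info))
    digest = hashlib.sha256(canonical.encode("utf-8")).hexdigest()
    return digest[:16]

# The same inputs always yield the same ID number, which can then be used
# together with the topic keywords to index the medical record.
print(derive_patient_id({"name": "Zhang San", "id_card": "110101199001011234"}))
```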

Optionally, performing medical image fusion may comprise: 1) geometrically registering the various images, where image registration is the process of spatially normalizing different images: geometric differences between the images are corrected by a mathematical model and the two images are brought into the same coordinate system, so that the same scene corresponds across the different local images, which facilitates subsequent fusion; 2) adjusting the gray scale of the high-resolution image so that its mean and variance equal those of the low-resolution image; 3) decomposing the detail-bearing image with a wavelet transform to obtain approximation information and detail information; 4) performing image enhancement or image compression on the obtained information to improve its visual quality; 5) replacing the approximation information obtained by the wavelet decomposition with the original, clear and detailed local high-definition image of the same region; 6) performing wavelet reconstruction with the replaced high-definition image and the approximation information obtained by the wavelet decomposition to obtain the fused image.

Optionally, the wavelet-based decomposition and fusion specifically comprises: 1) performing a two-dimensional wavelet decomposition on each registered source image I_1, I_2, …, I_n, with the number of decomposition levels set to J; 2) applying an averaging fusion rule to the low-frequency decomposition coefficients: with A_{1,J}, A_{2,J}, …, A_{n,J} denoting the low-frequency components of the images to be fused at wavelet decomposition scale J, the fused low-frequency component is

A_J = (A_{1,J} + A_{2,J} + … + A_{n,J}) / n

3) for the high-frequency decomposition coefficients, taking the wavelet coefficient whose absolute value is largest at each corresponding position as the wavelet coefficient of the fused image at that position, i.e.

D_j = max(D_{1,j}, D_{2,j}, …, D_{n,j})

where 1 ≤ j ≤ J and D_{i,j} (i = 1, 2, …, n) is the high-frequency decomposition coefficient of source image I_i at level j;

4) applying the inverse wavelet transform to the fused wavelet coefficients to obtain the fused image.

Drawings

The features and advantages of the present invention will be more clearly understood by reference to the accompanying drawings, which are illustrative and not to be construed as limiting the invention in any way.

FIG. 1 is a schematic diagram of the system of the present invention.

Detailed Description

These and other features and characteristics of the present invention, as well as the methods of operation and functions of the related elements of structure and the combination of parts and economies of manufacture, will be better understood upon consideration of the following description and the accompanying drawings, which form a part of this specification. It is to be expressly understood, however, that the drawings are for the purpose of illustration and description only and are not intended as a definition of the limits of the invention. It will be understood that the figures are not drawn to scale. Various block diagrams are used in the present invention to illustrate various variations of embodiments according to the present invention.

Example 1

As shown in fig. 1, in an embodiment of the present invention the system includes a security center, a plurality of user terminals, and a remote server. Optionally, a user terminal may be a computer system, which may be, for example, a standard personal computer with a standard CPU, memory and storage, an enhanced picture archiving and communication system, or an add-on subsystem of an existing PACS and/or RIS. In an embodiment of the invention, the computer system may be arranged to analyze and prioritize images and patient cases.

The computer system may automatically retrieve medical images from an imaging module (e.g., a CT scanner, an MRI scanner, or a PET/CT scanner) or from a database or PACS in which medical images are stored, automatically analyze the medical images, and provide the medical images and analysis results for review by a reviewing physician, a referring physician, or a specialist such as a radiologist.

The system comprises a security center and a plurality of user terminals, wherein the security center comprises a security verification module, which is used to obtain a public/private key pair, initialize the system, and generate the system's public parameters and master key; to extract abstract information from each patient's medical record documents and obtain topic keywords; and to select ID (identity) information of a user and generate a unique, deterministic ID number by an agreed algorithm, wherein the ID number indexes the user's medical record information;
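
As a rough, non-authoritative sketch of the security center's initialization, the code below generates a public/private key pair with the `cryptography` package and uses a random byte string as the master key; the function name `setup_security_center` and the choice of RSA are assumptions for illustration, since the application does not fix a concrete cryptographic scheme.

```python
import os
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa

def setup_security_center():
    """Hypothetical initialization: key pair, public parameters, master key."""
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    # The serialized public key serves here as the published "public parameters".
    public_params = private_key.public_key().public_bytes(
        encoding=serialization.Encoding.PEM,
        format=serialization.PublicFormat.SubjectPublicKeyInfo,
    )
    master_key = os.urandom(32)  # stand-in for the system master key
    return private_key, public_params, master_key

private_key, public_params, master_key = setup_security_center()
```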

the user terminal is used to request the medical data of the patient with the corresponding ID number from the remote server; after the user terminal and the security verification module of the security center perform handshake confirmation, the user terminal sends the public parameters, the master key and the user ID information, and after the security center verifies the master key and the user ID information, it downloads the patient's encrypted original medical data from the server; the public parameters are used to verify and identify the system and the user terminal, and to determine whether the user terminal has access rights.

The security verification unit of the security center judges whether the attribute set of the data requester at the user terminal satisfies the attribute threshold in the access policy set by the patient for the encrypted medical data; if it does, the patient's medical data is decrypted successfully to obtain the plaintext; otherwise, decryption fails.

Optionally, the attribute threshold is the authorization level set individually by the user; optionally, different attribute thresholds in the access policy correspond to different data, and a requester who satisfies a high threshold may also be considered for information protected by a lower threshold. The requester's attribute set corresponds to the requester's job title, medical experience, specialty, department, and so on.
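
A minimal sketch of this attribute-threshold check, assuming the access policy is represented simply as a mapping from attribute names to required levels; the real system would embed this logic in its attribute-based encryption scheme, which this example does not attempt to reproduce.

```python
def satisfies_policy(requester_attrs: dict, policy: dict) -> bool:
    """Return True if the requester's attribute set meets every threshold in
    the patient's access policy (hypothetical attribute -> level mapping)."""
    return all(requester_attrs.get(attr, 0) >= level
               for attr, level in policy.items())

# Example: a senior physician requesting imaging data.
policy = {"job_title": 3, "medical_experience_years": 5}
requester = {"job_title": 4, "medical_experience_years": 10, "department": 1}
assert satisfies_policy(requester, policy)  # decryption would proceed
```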

The medical data is an image data document, and a history of browsing the image document is obtained. When the document is browsed, the notes and annotation information entered by the doctor at the user terminal are weighted onto the corresponding image information as saliency annotations. For example, records and annotations made during an expert consultation offset the saliency weighting, i.e. they affect the assignment of the value d.

Optionally, a picture from an expert consultation, or a picture that a doctor has marked as significant, has a higher priority, so that during subsequent data access a suitable number of image pictures, or only the notes and annotation information, can be transmitted in time according to the transmission state of the access link. This improves the security of the accessed information; when only the notes and annotation information are transmitted, the security center signs the information to ensure its reliability.
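
One way to realize such a signature over annotation-only transmissions is sketched below with an HMAC keyed by the master key; this is an assumption for illustration, and the security center could equally sign with its private key.

```python
import hashlib
import hmac
import json

def sign_annotations(master_key: bytes, annotations: dict) -> str:
    """Hypothetical integrity tag for annotation-only transmissions."""
    payload = json.dumps(annotations, sort_keys=True).encode("utf-8")
    return hmac.new(master_key, payload, hashlib.sha256).hexdigest()

def verify_annotations(master_key: bytes, annotations: dict, tag: str) -> bool:
    """Verify the tag at the receiving user terminal."""
    return hmac.compare_digest(sign_annotations(master_key, annotations), tag)
```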

Optionally, the image document is an image of a predetermined region of the patient's body acquired by MRI, PET or PET/CT scanning; scanning the patient provides one or more images of the predetermined region. The local user terminal obtains the annotation information of the plurality of images, selects the image with the highest significance level, and fuses it with the locally acquired image data of the user terminal.

Illustratively, when blood in a pleural effusion (hemothorax) is detected, the region where the detected blood enters the pleural effusion can be highlighted and the patient can be automatically guided to the corresponding position for verification according to the standard information on the security center's server, so that the radiologist can diagnose the hemothorax without further measuring the fluid intensity in other slices; parameters of local features can be obtained through image fusion and, based on the fusion of the detected images, provided to subsequent diagnosticians, thereby improving review efficiency.

Saliency is determined as follows: one or more parameters may be set and weighted to influence the determination of the significance level. In practice, a higher priority, such as an expert consultation, may be reflected in the parameter d, adding an offset to the determination of the image's significance level. The optional parameters include, for example, clinical relevance, diagnosis mode, segmented region, number of pixels, and/or image reading time. A significance level is determined for each of a plurality of mutually related images. In certain embodiments, the significance level is stored in association with the image. Preferably, the significance level is determined by the equation: significance level = a·(clinical relevance) + b·(segmented region) + c·(reading time) + d, where a, b and c are weighting coefficients and d is an offset.
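
The significance-level equation reads directly as a weighted sum; the sketch below computes it and ranks candidate images, with the weight values chosen arbitrarily for illustration rather than taken from this application.

```python
def significance_level(clinical_relevance, segmented_region, reading_time,
                       a=0.5, b=0.3, c=0.2, d=0.0):
    """significance level = a*(clinical relevance) + b*(segmented region)
    + c*(reading time) + d; the weights are placeholders, and d can be raised
    for images flagged during an expert consultation."""
    return a * clinical_relevance + b * segmented_region + c * reading_time + d

# Rank candidate images and pick the most significant one for fusion.
images = [
    {"id": "im1", "clinical_relevance": 0.9, "segmented_region": 0.4, "reading_time": 0.7},
    {"id": "im2", "clinical_relevance": 0.6, "segmented_region": 0.8, "reading_time": 0.2},
]
best = max(images, key=lambda im: significance_level(
    im["clinical_relevance"], im["segmented_region"], im["reading_time"]))
print(best["id"])
```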

It should be noted in particular that the locally taken images and the original images the doctor is browsing are likewise evaluated according to the above parameters and synchronously transmitted to the security center's server.

Optionally, the degree of importance is determined automatically when the doctor marks an image as important. Preferably, the plurality of images may be viewed in an order determined by the importance level. The images may be stored on the server arranged by importance level, the corresponding image information is accessed according to the patient's access policy attributes, and the subsequent image fusion is then performed.

The user terminal acquires the images to be fused and displays the key image fused with the image taken at the hospital; the multi-modal image fusion module in the user terminal is connected to the medical image post-processing unit and is used to fuse the patient's multi-modal images and to perform three-dimensional reconstruction and visualization. The patient's multi-modal medical images are fused in a semi-automatic registration mode: for medical images of different modalities, the multi-modal image fusion unit selects clearly visible, one-to-one corresponding anatomical landmark points and performs registration using a point-set-to-point-set registration method;

a reference medical image is selected from the multiple medical images to be fused, the remaining medical images are treated as images to be registered, and corner features related to the anatomical landmark points are extracted from the reference image and the images to be registered; then the similarity of the corner features in the two images is computed one by one and corner feature matching is performed.

Optionally, the parameters of a geometric transformation model are estimated from the successfully matched feature point pairs; finally, the image to be registered is resampled and transformed according to the parameters of the geometric transformation model. A nonlinear transformation, an affine transformation, or another geometric transformation model may be selected.
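
An illustrative OpenCV sketch of this registration step: ORB keypoints stand in for the corner features, brute-force matching for the pairwise similarity comparison, and a RANSAC-estimated affine model for the geometric transformation; these specific choices are assumptions of the example, not requirements of this application.

```python
import cv2
import numpy as np

def register_to_reference(reference: np.ndarray, moving: np.ndarray) -> np.ndarray:
    """Estimate a geometric transform from matched corner-like features and
    resample the moving image into the reference coordinate system."""
    orb = cv2.ORB_create(500)
    kp_ref, des_ref = orb.detectAndCompute(reference, None)
    kp_mov, des_mov = orb.detectAndCompute(moving, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_mov, des_ref), key=lambda m: m.distance)[:50]
    src = np.float32([kp_mov[m.queryIdx].pt for m in matches])
    dst = np.float32([kp_ref[m.trainIdx].pt for m in matches])
    # Robustly estimate the geometric transformation model (affine here).
    model, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    h, w = reference.shape[:2]
    return cv2.warpAffine(moving, model, (w, h))
```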

Optionally, before the reference image is selected, the method further includes judging whether the image resolutions are consistent; if they are not, the high-resolution and low-resolution image features corresponding to the image to be segmented are subjected to down-sampling interaction processing and then fused to obtain high-resolution and low-resolution down-sampling interaction features; the down-sampling interaction features are subjected to convolution interaction processing and then fused to obtain high-resolution and low-resolution convolution interaction features; the convolution interaction features are subjected to up-sampling interaction processing and then fused to obtain high-resolution and low-resolution up-sampling interaction features; and the target object is segmented from the image to be segmented according to the high-resolution and low-resolution up-sampling interaction features.
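
The "interaction" operations above are described only loosely, so the following is one plausible reading, heavily hedged: simple resampling and uniform filtering stand in for the down-sampling, convolution, and up-sampling interactions, and the fusion is a plain average. It is not presented as the application's actual algorithm.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from skimage.transform import resize

def harmonize_features(high: np.ndarray, low: np.ndarray):
    """Hypothetical interaction between a high-resolution and a low-resolution
    feature map, returning up-sampling interaction features at both scales."""
    # 1) Down-sampling interaction: bring the high-resolution map to the low
    #    resolution and fuse by averaging.
    high_ds = resize(high, low.shape, order=1, anti_aliasing=True)
    ds_interact = 0.5 * (high_ds + low)
    # 2) Convolution interaction: a uniform filter stands in for the convolution.
    conv_high = uniform_filter(ds_interact, size=3)
    conv_low = uniform_filter(low, size=3)
    # 3) Up-sampling interaction: return both maps to the high resolution and fuse.
    up_high = resize(conv_high, high.shape, order=1)
    up_low = resize(conv_low, high.shape, order=1)
    return up_high, 0.5 * (up_high + up_low)
```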

Optionally, the image fusion step comprises: 1) geometrically registering the various images, where image registration is the process of spatially normalizing different images: geometric differences between the images are corrected by a mathematical model and the two images are brought into the same coordinate system, so that the same scene corresponds across the different local images, which facilitates subsequent fusion; 2) adjusting the gray scale of the high-resolution image so that its mean and variance equal those of the low-resolution image; 3) decomposing the detail-bearing image with a wavelet transform to obtain approximation information and detail information; 4) performing image enhancement or image compression on the obtained information to improve its visual quality; 5) replacing the approximation information obtained by the wavelet decomposition with the original, clear and detailed local high-definition image of the same region; 6) performing wavelet reconstruction with the replaced high-definition image and the approximation information obtained by the wavelet decomposition to obtain the fused image.

Optionally, the wavelet-based decomposition and fusion specifically comprises: 1) performing a two-dimensional wavelet decomposition on each registered source image I_1, I_2, …, I_n, with the number of decomposition levels set to J; 2) applying an averaging fusion rule to the low-frequency decomposition coefficients: with A_{1,J}, A_{2,J}, …, A_{n,J} denoting the low-frequency components of the images to be fused at wavelet decomposition scale J, the fused low-frequency component is

A_J = (A_{1,J} + A_{2,J} + … + A_{n,J}) / n

3) for the high-frequency decomposition coefficients, taking the wavelet coefficient whose absolute value is largest at each corresponding position as the wavelet coefficient of the fused image at that position, i.e.

D_j = max(D_{1,j}, D_{2,j}, …, D_{n,j})

where 1 ≤ j ≤ J and D_{i,j} (i = 1, 2, …, n) is the high-frequency decomposition coefficient of source image I_i at level j;

4) applying the inverse wavelet transform to the fused wavelet coefficients to obtain the fused image.
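
The fusion rule above maps naturally onto PyWavelets; the sketch below also folds in the mean/variance matching from step 2) of the fusion procedure. The 'db1' wavelet, two decomposition levels, and the element-wise maximum-absolute-value selection are illustrative assumptions.

```python
import numpy as np
import pywt

def match_mean_variance(high_res: np.ndarray, low_res: np.ndarray) -> np.ndarray:
    """Step 2): adjust the high-resolution image so its mean and variance
    match those of the low-resolution image."""
    scaled = (high_res - high_res.mean()) / (high_res.std() + 1e-8)
    return scaled * low_res.std() + low_res.mean()

def wavelet_fuse(images, wavelet="db1", level=2):
    """Average the low-frequency coefficients, keep the coefficient with the
    largest absolute value at each high-frequency position, then reconstruct."""
    decomps = [pywt.wavedec2(img, wavelet, level=level) for img in images]
    fused = [np.mean([d[0] for d in decomps], axis=0)]       # low frequency: average
    for j in range(1, level + 1):
        bands = []
        for b in range(3):                                    # horizontal, vertical, diagonal
            stack = np.stack([d[j][b] for d in decomps])
            idx = np.argmax(np.abs(stack), axis=0)            # position of max |coefficient|
            bands.append(np.take_along_axis(stack, idx[None], axis=0)[0])
        fused.append(tuple(bands))
    return pywt.waverec2(fused, wavelet)
```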

Preferably, taking a lung image as an example, synthesizing and displaying the image of the key region may involve overlap identification of the key region: it is determined whether each pixel of the MASK region is greater than or equal to a predetermined threshold; if so, the pixel is judged to belong to the lung region, and if not, to a non-key region. For example, assuming the predetermined threshold is 0.5, the sigmoid function is used to determine whether each pixel of the MASK region is greater than or equal to 0.5; if the pixel value is greater than or equal to 0.5, the pixel belongs to the lung region; if it is less than 0.5, the pixel belongs to a non-key region.
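
A small sketch of this thresholding step; the raw MASK scores are assumed to come from an upstream segmentation network, which is outside the scope of the example.

```python
import numpy as np

def lung_region_mask(mask_scores: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Apply the sigmoid to raw MASK-region scores and keep pixels at or above
    the predetermined threshold as the lung (key) region."""
    probabilities = 1.0 / (1.0 + np.exp(-mask_scores))
    return probabilities >= threshold  # True: lung region, False: non-key region
```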

Optionally, a fused lung feature map of the lung image may be obtained based on the fusion result; and inputting the fused lung feature map into a classification network, and determining the pneumoconiosis grade of each lung region of the lung image through the classification network.

Preferably, in practice, when for example the medical image contains large background areas with a single gray level, the spacing between adjacent gray levels increases and the output image may show degradation such as false contours. The fusion module at the user's local end is therefore further configured to define a rectangular sub-region and a moving step, move the sub-region by that step so as to traverse the entire image, equalize all pixels within the current sub-region at each position, perform this equalization several times for each pixel of the original image, and finally take the average of the equalized values as the gray value of the corresponding pixel of the output image, thereby enhancing the image.
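
The enhancement described above resembles a sliding-window histogram equalization; the sketch below is one literal reading of it, using scikit-image's per-window equalization, and is offered as an approximation rather than the application's exact procedure.

```python
import numpy as np
from skimage import exposure

def sliding_window_equalize(image: np.ndarray, win: int = 64, step: int = 32) -> np.ndarray:
    """Move a win x win sub-region across the image with the given step,
    histogram-equalize each sub-region, and average the equalized values
    each pixel receives."""
    img = image.astype(np.float64)
    acc = np.zeros_like(img)
    count = np.zeros_like(img)
    h, w = img.shape
    # Window origins; the last origin is clamped so the whole image is covered.
    ys = sorted(set(list(range(0, max(h - win, 1), step)) + [max(h - win, 0)]))
    xs = sorted(set(list(range(0, max(w - win, 1), step)) + [max(w - win, 0)]))
    for y in ys:
        for x in xs:
            acc[y:y + win, x:x + win] += exposure.equalize_hist(img[y:y + win, x:x + win])
            count[y:y + win, x:x + win] += 1
    return acc / count
```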

Example 2

It will be understood by those skilled in the art that all or part of the processes comprising the method steps in embodiment 1 above can be implemented by a computer program, which can be stored in a computer-readable storage medium and which, when executed, can include the processes of the above method embodiments. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a Flash Memory, a Hard Disk Drive (HDD), a Solid State Drive (SSD), or the like; the storage medium may also comprise a combination of the above kinds of memories.

As used in this application, the terms "component," "module," "system," and the like are intended to refer to a computer-related entity, either hardware, firmware, a combination of hardware and software, or software in execution. For example, a component may be, but is not limited to being: a process running on a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of example, both an application running on a computing device and the computing device can be a component. One or more components can reside within a process and/or thread of execution and a component can be localized on one computer and/or distributed between two or more computers. In addition, these components can execute from various computer readable media having various data structures thereon. The components may communicate by way of local and/or remote processes such as in accordance with a signal having one or more data packets (e.g., data from one component interacting with another component in a local system, distributed system, and/or across a network such as the internet with other systems by way of the signal).

It should be noted that the above-mentioned embodiments are only intended to illustrate the technical solutions of the present invention and not to limit them. Although the present invention has been described in detail with reference to the preferred embodiments, those skilled in the art should understand that modifications or equivalent substitutions may be made to the technical solutions of the present invention without departing from their spirit and scope, and such modifications and substitutions should be covered by the claims of the present invention.
