Image processing apparatus, image processing method, and computer readable medium


1. An image processing apparatus comprising a processor,

wherein the processor performs the following processing:

accepting an image having a foreground and a background,

accepting an insertion object,

calculating a position at which the insertion object is arranged, based on respective features of the foreground, the background, and the insertion object,

and outputting the insertion object so that the insertion object is arranged at the calculated position.

2. The image processing apparatus according to claim 1,

the processor finds the center of gravity position of the foreground, and finds the center of gravity position of the insertion object from the center of gravity position of the foreground.

3. The image processing apparatus according to claim 2,

the processor obtains the center of gravity position of the insertion object such that it is point-symmetric to the center of gravity position of the foreground about the center of the image.

4. The image processing apparatus according to claim 1,

the processor finds the center of gravity position of the foreground, and calculates the center of gravity position of the insertion object so that it belongs to the background.

5. The image processing apparatus according to any one of claims 2 to 4,

the processor finds the center of gravity position of the foreground taking into account color differences of the foreground relative to the background.

6. The image processing apparatus according to any one of claims 2 to 5,

the processor finds the center of gravity position of the insertion object taking into account a color difference of the insertion object with respect to the background.

7. The image processing apparatus according to any one of claims 1 to 6,

the processor calculates the position at which to arrange the insertion object taking into account data relating to the type of the foreground.

8. The image processing apparatus according to any one of claims 1 to 7,

the processor calculates the position at which to arrange the insertion object taking into account data relating to the type of the insertion object.

9. The image processing apparatus according to any one of claims 1 to 8,

in a case where the insertion object extends beyond the image, the processor moves the insertion object in such a manner that the insertion object is accommodated within the image.

10. The image processing apparatus according to any one of claims 1 to 8,

in a case where the insertion object overlaps with the foreground, the processor moves the insertion object in such a manner that the insertion object is accommodated within the background.

11. The image processing apparatus according to any one of claims 1 to 10,

the processor accepts a 1st insertion object and a 2nd insertion object, calculates a position at which the 1st insertion object is arranged from respective features of the foreground, the background, and the 1st insertion object, and calculates a position at which the 2nd insertion object is arranged by adding the 1st insertion object to the foreground.

12. A computer-readable medium in which a program for causing a computer to execute a process is stored,

the process has the following steps:

accepting an image having a foreground and a background;

accepting an insertion object;

calculating a position at which to arrange the insertion object, based on the respective features of the foreground, the background, and the insertion object; and

outputting the insertion object so that the insertion object is arranged at the calculated position.

13. An image processing method having the steps of:

accepting an image having a foreground and a background;

accepting an insertion object;

calculating a position at which to arrange the insertion object, based on the respective features of the foreground, the background, and the insertion object; and

outputting the insertion object so that the insertion object is arranged at the calculated position.

Background

Japanese patent No. 5302258 discloses a method for locating a target object in an electronic document, the method comprising the steps of: identifying, for an input electronic document, a 1st target object and a 2nd target object that exist within a page of the electronic document and are to be located; detecting saliency for the 1st target object, that is, generating a saliency map of the 1st target object; generating a 1st one-dimensional guideline profile for the 1st target object based on the detection of saliency for the 1st target object, the guideline profile being characterized based on a one-dimensional average of saliency in the saliency map; generating a 2nd one-dimensional guideline profile for the 2nd target object based on the detection of saliency for the 2nd target object; locating the 1st target object and the 2nd target object based on the 1st guideline profile and the 2nd guideline profile and generating a revised document; and outputting the revised document.

Japanese patent No. 6023058 discloses an image processing apparatus having: a dividing unit that divides the inside of each of a plurality of images into a plurality of segments; a calculation unit that calculates the importance of each segment in one image based on the relationship between different segments in the one image or the relationship between a segment in the one image and a segment in a predetermined other image; and a classification unit that classifies each segment as one of a target object, a foreground, and a background. The calculation unit calculates the importance of a segment using at least one of the attention of the segment, the co-occurrence of the segment, and the importance of the target object; calculates the attention of a segment to be higher the closer the segment is to the position estimated to be the position of interest on which the photographer focused in the one image; calculates the importance of the foreground segment and the background segment based on the calculated attention; and calculates the center of gravity of the target object segment in the one image, taking the position point-symmetric to that center of gravity about the center point of the image as the position of interest.

Japanese patent No. 6422228 discloses a program for causing a computer to function as: a reception means that displays an input screen for inputting text and treats a text object representing the text input on the input screen as an object to be newly inserted into a page; and a display means that displays a page in which target objects are arranged on a display, displays the input screen in response to a user instruction on a display item other than the page, displays the page with the 1st text object received by the reception means inserted, and displays a plurality of target objects on the page in an overlapping manner. In a case where a 2nd text object is already arranged at a predetermined position on the page when the reception means receives the 1st text object, the 1st text object is inserted into the page at a position not overlapping with the 2nd text object, based on the position where the 2nd text object is arranged.

Disclosure of Invention

However, it is sometimes necessary to insert an object into the background of an image having a foreground and a background.

An object of the present disclosure is to provide an image processing apparatus, an image processing method, and a computer-readable medium capable of arranging an insertion object so as to maintain design balance when the insertion object is inserted into the background of an image having a foreground and a background.

According to a first aspect of the present disclosure, there is provided an image processing apparatus including a processor that performs the following processing: an image having a foreground and a background is received, an insertion object is received, a position where the insertion object is arranged is calculated based on respective features of the foreground, the background, and the insertion object, and the insertion object is output so as to be arranged at the calculated position.

According to a second aspect of the present disclosure, the processor finds the center of gravity position of the foreground, and finds the center of gravity position of the insertion object from the center of gravity position of the foreground.

According to a third aspect of the present disclosure, the processor obtains the barycentric position of the insertion object such that the barycentric position of the insertion object is symmetrical with the barycentric position of the foreground with the center of the image as the center.

According to a fourth aspect of the present disclosure, the processor finds the center of gravity position of the foreground, and calculates the center of gravity position of the insertion object so that it belongs to the background.

According to a fifth aspect of the present disclosure, the processor finds the center-of-gravity position of the foreground taking into account a color difference of the foreground with respect to the background.

According to a sixth aspect of the present disclosure, the processor finds the barycentric position of the insertion object taking into account a color difference of the insertion object with respect to the background.

According to a seventh aspect of the present disclosure, the processor calculates a position to configure the insertion object taking into account data relating to the type of the foreground.

According to an eighth aspect of the present disclosure, the processor calculates the position where the insertion object is arranged, taking into account data related to the type of the insertion object.

According to a ninth aspect of the present disclosure, in a case where the insertion object extends beyond the image, the processor moves the insertion object in such a manner that the insertion object is accommodated within the image.

According to a tenth aspect of the present disclosure, in a case where the insertion object overlaps with the foreground, the processor moves the insertion object in such a manner that the insertion object is accommodated within the background.

According to an eleventh aspect of the present disclosure, the processor accepts a 1st insertion object and a 2nd insertion object, calculates a position where the 1st insertion object is arranged from respective features of the foreground, the background, and the 1st insertion object, and calculates a position where the 2nd insertion object is arranged by treating the 1st insertion object as part of the foreground.

According to a twelfth aspect of the present disclosure, there is provided a computer-readable medium in which a program for causing a computer to execute a process is stored, the process having the steps of: accepting an image having a foreground and a background; accepting an insertion object; calculating a position at which to arrange the insertion object, based on the respective features of the foreground, the background, and the insertion object; and outputting the insertion object so that the insertion object is arranged at the calculated position.

According to a thirteenth aspect of the present disclosure, there is provided an image processing method having the steps of: accepting an image having a foreground and a background; accepting an insertion object; calculating a position at which to arrange the insertion object, based on the respective features of the foreground, the background, and the insertion object; and outputting the insertion object so that the insertion object is arranged at the calculated position.

(Effect)

According to the first, twelfth, or thirteenth aspect, when an insertion object is inserted into the background of an image having a foreground and a background, the insertion object can be arranged so as to maintain design balance.

According to the second aspect, the center of gravity position of the insertion object can be calculated from the center of gravity of the foreground.

According to the third aspect, the center of gravity of the insertion object can be arranged at a position symmetrical to the center of gravity position of the foreground about the center of the image.

According to the fourth aspect, the insertion object can be arranged in the background image.

According to the fifth aspect, the center of gravity position of the foreground can be obtained with the color difference of the foreground relative to the background taken into account.

According to the sixth aspect, the center of gravity position of the insertion object can be obtained with the color difference of the insertion object relative to the background taken into account.

According to the seventh aspect, the insertion object can be arranged with the type of the foreground taken into account.

According to the eighth aspect, the insertion object can be arranged with the type of the insertion object taken into account.

According to the ninth aspect, even when the calculated position would place the insertion object beyond the image, the insertion object can be corrected so as to be accommodated within the image.

According to the tenth aspect, even when the calculated position would place the insertion object over the foreground, the insertion object can be corrected so as to be accommodated within the background.

According to the eleventh aspect, even when there are a plurality of insertion objects, the plurality of insertion objects can be arranged so as to maintain design balance.

Drawings

Fig. 1 is a block diagram showing hardware of an image processing apparatus according to an embodiment of the present disclosure.

Fig. 2 is a block diagram showing a software configuration of an image processing apparatus according to an embodiment of the present disclosure.

Fig. 3 is a flowchart showing a flow of processing for constructing a saliency map in the image processing apparatus according to the embodiment of the present disclosure.

Fig. 4 is a transition diagram showing how an image changes while a saliency map is constructed in the image processing apparatus according to the embodiment of the present disclosure.

Fig. 5 is a table showing importance coefficients for image types in the image processing apparatus according to the embodiment of the present disclosure.

Fig. 6 is a flowchart showing an operation flow of the image processing apparatus according to the embodiment of the present disclosure.

Fig. 7 is an explanatory diagram illustrating a method of clipping a foreground image and a background image in the image processing apparatus according to the embodiment of the present disclosure.

Fig. 8 is an explanatory diagram illustrating a method for searching a nearby arrangement space in the image processing apparatus according to the embodiment of the present disclosure.

Fig. 9 is an explanatory diagram for explaining processing in example 1 of the present disclosure.

Fig. 10 is an explanatory diagram for explaining a method of determining the center of gravity position of an insertion object in example 1 of the present disclosure.

Fig. 11 is an explanatory diagram for explaining processing in example 2 of the present disclosure.

Fig. 12 is an explanatory diagram for explaining a method of determining the center of gravity position of an insertion object in example 2 of the present disclosure.

Fig. 13 is an explanatory diagram for explaining processing in example 3 of the present disclosure.

Fig. 14 is an explanatory diagram for explaining a method of determining the center of gravity position of an insertion object in example 3 of the present disclosure.

Fig. 15 is an explanatory diagram for explaining processing in example 4 of the present disclosure.

Fig. 16 is an explanatory diagram for explaining a method of determining the center of gravity position of an illustration in example 4 of the present disclosure.

Fig. 17 is an explanatory diagram for explaining a method of determining the center of gravity position of a text in example 4 of the present disclosure.

Fig. 18 is an explanatory diagram for explaining screen changes in the case of changing the size and color of a text in example 5 of the present disclosure, in which fig. 18 (a) shows the screen before the change and fig. 18 (b) shows the screen after the change.

Detailed Description

Next, embodiments of the present disclosure will be described in detail with reference to the accompanying drawings.

Fig. 1 shows a hardware configuration of an image processing apparatus 10 of an embodiment of the present disclosure.

The image processing apparatus 10 includes a processor 12, a memory 14, a storage device 16, an operation display device interface 18, a communication interface 20, and an input interface 22, which are connected via a bus 24.

The processor 12 executes predetermined processing based on a control program stored in the memory 14. The storage device 16 is constituted by, for example, a hard disk, and stores necessary software and data. An operation display device 26 is connected to the operation display device interface 18. The operation display device 26 includes a touch panel 28 and a display 30, receives operation data from the touch panel 28, and transmits display data to the display 30.

The communication interface 20 is connected to a terminal device or a server via a LAN (local area network) 32, and receives images from or transmits images to the terminal device or the server. The connection is not limited to a LAN; the terminal device or the server may also be connected via the Internet.

The input interface 22 is connected to a mouse 34 and a keyboard 36, and inputs operation signals or operation data from the mouse 34 and the keyboard 36.

Fig. 2 shows a software configuration for realizing the functions of the image processing apparatus 10.

The image processing apparatus 10 includes an insertion target image receiving unit 38 and an insertion object receiving unit 40.

The insertion target image receiving unit 38 receives an image having a foreground and a background, such as an image stored in the storage device 16 or an image transmitted from a terminal device or server via the communication interface 20. Similarly, the insertion object receiving unit 40 receives the image to be inserted, which may be an image stored in the storage device 16, an image transmitted from a terminal device or server via the communication interface 20, or an image formed from text. Received insertion objects include text, logos, illustrations, and images.

As described later, the foreground/background clipping unit 42 clips the foreground and the background of the image received by the insertion target image receiving unit 38 based on a saliency map of that image.

As described later, the foreground image center of gravity calculation unit 44 calculates the center of gravity position of the foreground using moments based on features such as the color difference between the foreground image and the background image, the color difference between the insertion object and the background image, and the visual importance associated with the type of the foreground.

As described below, the insertion object center of gravity calculation unit 46 calculates the center of gravity position of the insertion object arrangement space so as to be point-symmetric to the center of gravity position of the foreground image about the image center.

As described later, the insertion object arrangement determination unit 48 determines whether or not the insertion object, at the position calculated by the insertion object center of gravity calculation unit 46, is accommodated in the background image; for example, whether the insertion object does not extend beyond the insertion target image and does not overlap the foreground image.

As described below, when the insertion object arrangement determination unit 48 determines that the insertion object is not accommodated in the background image, the insertion object arrangement change unit 50 changes the arrangement of the insertion object so that the insertion object is accommodated in the background image.

The result display control unit 52 controls the display to display a state in which the insertion object is inserted into the background image.

As described later, the importance database 54 stores the visual importance preset for each type of image.

Next, the construction of the saliency map performed by the foreground/background clipping unit 42 will be described.

Fig. 3 shows a flowchart for constructing a saliency map, and fig. 4 shows how the image changes as the flow is executed. When the insertion target image (original image) shown in fig. 4 (a) is input, first, in step S10, the original image is color-reduced by K-means, one of the non-hierarchical cluster analysis methods. When the original image is color-reduced by K-means, a color-reduced image is generated as shown in fig. 4 (b).

In the next step S12, a histogram indicating the frequency of each pixel value of the color-reduced original image is extracted. In the next step S14, the pixel values of each of the R, G, and B channels corresponding to the histogram extracted in step S12, together with the alpha channel, which indicates pixel transparency, are extracted.

In the next step S16, the content whose pixel values are brighter than a predetermined threshold is extracted as the saliency map. Once the saliency map is extracted, the foreground and the background can be distinguished as shown in fig. 4 (c). Further, when the blurring process is performed in step S18, the salient regions are concentrated as shown in fig. 4 (d), and the foreground and the background can then be clipped.
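
As a rough sketch of steps S10 to S18, the following Python code posterizes the image with K-means, thresholds the bright content, and blurs the result. It is an interpretation under assumptions, not the patented implementation: the use of OpenCV and scikit-learn is assumed, step S14's per-channel and alpha extraction is omitted, and the cluster count, threshold, and kernel size are illustrative.

# Minimal sketch of the saliency-map pipeline (steps S10-S18).
# Assumes OpenCV, NumPy, scikit-learn; all parameters are illustrative.
import cv2
import numpy as np
from sklearn.cluster import KMeans

def build_saliency_map(image_bgr, n_colors=8, threshold=128, blur_ksize=15):
    h, w = image_bgr.shape[:2]

    # Step S10: color reduction with K-means (non-hierarchical clustering).
    pixels = image_bgr.reshape(-1, 3).astype(np.float32)
    km = KMeans(n_clusters=n_colors, n_init=4, random_state=0).fit(pixels)
    reduced = km.cluster_centers_[km.labels_].reshape(h, w, 3).astype(np.uint8)

    # Step S12: histogram of the color-reduced image (frequency per cluster).
    hist = np.bincount(km.labels_, minlength=n_colors)

    # Step S16: keep content brighter than the threshold as the saliency map.
    gray = cv2.cvtColor(reduced, cv2.COLOR_BGR2GRAY)
    _, sal_map = cv2.threshold(gray, threshold, 255, cv2.THRESH_BINARY)

    # Step S18: blur to concentrate the salient regions before clipping.
    return hist, cv2.GaussianBlur(sal_map, (blur_ksize, blur_ksize), 0)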

Next, a method of calculating the center of gravity of the foreground image calculated by the foreground image center of gravity calculating unit 44 will be described.

Equation (1) is the general formula for calculating the physical center of gravity position in an XY orthogonal coordinate system.

The barycentric position is found by dividing the 1 st moment by the 0 th moment.
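
Written out in the standard form implied by this description (the published formula itself is not reproduced above), equation (1) is:

x_g = Σ_x Σ_y x·f(x, y) / Σ_x Σ_y f(x, y),  y_g = Σ_x Σ_y y·f(x, y) / Σ_x Σ_y f(x, y)  (1)

where each denominator is the 0th moment and each numerator is a 1st moment.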

Here, when the center of gravity position is determined, adding the elements of color difference and importance gives equation (2):

f(x, y) = α_{x,y} · w(x, y)  (2)

where w(x, y) is the color difference between the foreground and the background, and α_{x,y} is a coefficient representing the importance set in advance according to the type of the foreground image.

w(x, y) is calculated from equation (3); a plausible form, consistent with the definitions that follow (the published formula is not reproduced here), is the Euclidean distance in RGB space:

w(x, y) = √( (R_{x,y} − R_bg)² + (G_{x,y} − G_bg)² + (B_{x,y} − B_bg)² )  (3)

Here, R_{x,y}, G_{x,y}, and B_{x,y} are the RGB values of each pixel of the foreground image in the RGB color space, and R_bg, G_bg, and B_bg are the average RGB values of the background in the RGB color space (hereinafter referred to as the dominant color).

The color space is not limited to RGB; HSV or Lab may also be used.
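
The weighted centroid of equations (1) to (3) can be sketched compactly as follows; the function name, arguments, and the Euclidean color difference are assumptions consistent with the definitions above, not the published code.

# Sketch of the importance- and color-difference-weighted centroid
# (equations (1)-(3)); the Euclidean RGB distance is an assumed form.
import numpy as np

def foreground_centroid(fg_rgb, fg_mask, bg_dominant_rgb, importance=1.0):
    # fg_rgb: H x W x 3 image; fg_mask: H x W boolean foreground mask;
    # bg_dominant_rgb: (R_bg, G_bg, B_bg) average background color.
    diff = fg_rgb.astype(np.float32) - np.asarray(bg_dominant_rgb, np.float32)
    w = np.sqrt((diff ** 2).sum(axis=2))   # w(x, y), equation (3)
    f = importance * w * fg_mask           # f(x, y) = alpha * w(x, y), eq. (2)

    ys, xs = np.mgrid[0:f.shape[0], 0:f.shape[1]]
    m0 = f.sum()                           # 0th moment
    return (xs * f).sum() / m0, (ys * f).sum() / m0   # equation (1)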

When the foreground includes a logo and text in addition to the image, the calculation is performed as in equation (4).

Here, center_img is the center of the original image, α_img is the importance of the foreground image, α_text is the importance of the text, α_logo is the importance of the logo, w_img is the color difference of the image, w_text is the color difference of the text, and w_logo is the color difference of the logo.

Further, the importance of each type of image stored in the importance database 54 is shown in fig. 5. These values were compiled from existing works and by professional designers: for example, text is easily noticed and is therefore set to 1.0, and since a person is more easily noticed than a building, a building is set to 0.05 and a person to 0.1.
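
For illustration only, the coefficients quoted in this description can be pictured as a lookup table; the full contents of fig. 5 are not reproduced, and the table structure is an assumption.

# Importance coefficients per image type, as quoted in this description
# (fig. 5 itself is not reproduced; entries beyond these would be guesses).
IMPORTANCE = {
    "text": 1.0,      # easily noticed
    "person": 0.1,
    "animal": 0.1,    # value used in example 3 below
    "building": 0.05,
}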

Next, the flow of operations executed by the processor 12 will be described.

A flow chart representing the flow of operation of processor 12 is shown in fig. 6.

First, in step S20, a saliency map of the insertion target image is extracted. In the next step S22, the salient region is determined and the foreground and background are clipped. The clipped foreground and background are temporarily stored as separate layers, a foreground image layer 56 and a background image layer 58, as shown in fig. 7.

In the next step S24, the center of gravity position of the foreground image is calculated using the above equations (1) and (2) based on the color difference between the foreground and the background. In the next step S26, the types of the foreground and the insertion object are identified, and the importance coefficients are read from the importance database 54.

In the next step S28, the center of gravity position of the insertion object is calculated as the position point-symmetric to the center of gravity position of the foreground about the center of the image.
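
In the simplest, unweighted reading, this is the point reflection (x_txt, y_txt) = (2·x_imgC − x_fg, 2·y_imgC − y_fg); the moment balance of equations (5) and (6) used in the examples below additionally weights the offset by importance and color difference. This decomposition is an interpretation of the description, not a quoted formula.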

In the next steps S30 and S32, it is determined whether the insertion object overlaps the foreground image and whether the insertion object extends beyond the background image. If it is determined in steps S30 and S32 that the insertion object neither overlaps the foreground image nor extends beyond the background image, the process proceeds to step S34, the image with the insertion object inserted is displayed, and the process ends. That is, as shown in fig. 8, when the background has a vacant space in which the insertion object can be arranged and the insertion object is accommodated in the background, the image with the insertion object inserted is displayed and the process ends.

On the other hand, if it is determined in step S30 that the insertion object overlaps the foreground image, the process proceeds to step S36. In step S36, the insertion object is moved to an empty space along the center-of-gravity symmetry line (the line connecting the center of gravity of the foreground image and the center of gravity of the insertion object, with the center of the image between them). If it is determined in step S32 that the insertion object extends beyond the background, the process proceeds to step S38. In step S38, the insertion object is moved into the background along the center-of-gravity symmetry line.

In the next step S40, it is determined whether or not there is a free space in which the insertion object is to be placed. If it is determined in step S40 that there is a free space in which the insertion object is to be placed, the process proceeds to step S34, where the image in which the insertion object is inserted is displayed, and the process ends.

If it is determined in step S40 that there is no empty space, the process proceeds to step S42, where a second-stage search for an empty space is performed. That is, as shown in fig. 8, the center of gravity position of the insertion object is gradually moved to the periphery, and it is determined whether the insertion object is accommodated in the background image. In this embodiment, taking the initially calculated center of gravity position of the insertion object as (X=0, Y=0), the center of gravity position is moved to the periphery, for example (X=1, Y=0) → (X=0, Y=1) → (X=-1, Y=0) → (X=1, Y=1) → (X=2, Y=0), and so on, to search for a free space for the insertion object. If a free space is found in step S42, the process proceeds to step S34, the image with the insertion object inserted is displayed, and the process ends. If no empty space is found even after the process of step S42 is executed, a warning is displayed and the process ends.
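
Steps S36 to S42 can be pictured with the following sketch. It is an interpretation, not the published procedure: the predicate fits() (inside the image and not overlapping the foreground) is a hypothetical helper, and the direction of movement along the symmetry line, the step size, and the search radius are assumptions.

# Sketch of the placement-adjustment loop (steps S36-S42): slide the
# insertion object along the center-of-gravity symmetry line, then search
# neighboring offsets for a free space. fits() is a hypothetical predicate.
from itertools import product

def adjust_placement(cog, center, fits, max_steps=10, step=10):
    # Steps S36/S38: move along the symmetry line through the image center.
    dx, dy = center[0] - cog[0], center[1] - cog[1]
    norm = max((dx * dx + dy * dy) ** 0.5, 1e-6)
    ux, uy = dx / norm, dy / norm
    for t in range(max_steps + 1):
        cand = (cog[0] + ux * t * step, cog[1] + uy * t * step)
        if fits(cand):
            return cand

    # Step S42: second-stage search, gradually moving to the periphery
    # around the initially calculated position (X=0, Y=0).
    offsets = sorted(product(range(-max_steps, max_steps + 1), repeat=2),
                     key=lambda o: (abs(o[0]) + abs(o[1]), o))
    for ox, oy in offsets:
        cand = (cog[0] + ox * step, cog[1] + oy * step)
        if fits(cand):
            return cand
    return None  # no free space: the flow displays a warning instead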

Next, examples of the present disclosure are explained.

Figs. 9 and 10 show example 1.

In example 1, as shown in fig. 9, an insertion object consisting of text such as "overseas travel" is inserted into the background image of an original image of 302 × 360 pixels. A text box (a box into which text is entered) is sized and the text is entered; the orientation of the text box is portrait.

The foreground image is recognized as a building, and the dominant color (R_bg, G_bg, B_bg) of the background image is as follows:

Dominant color of the background image: (R_bg, G_bg, B_bg) = (181, 195, 206)

The center of gravity of the foreground image is determined from the above equations (1), (2), and (3) using the color difference w(x, y) between the RGB data of each pixel (x, y) of the foreground image and the RGB data of the dominant color of the background image.

As a result, the center of gravity position (x_fg, y_fg) of the foreground image is as follows:

Center of gravity position of the foreground image: (x_fg, y_fg) = (228, 237)

In addition, the center of the original image is (x_imgC, y_imgC) = (151, 180).

The importance of the building and the text is as follows:

Importance of the building: α_fg = 0.05

Importance of the text: α_txt = 1.0

When the insertion object is input as an image, the insertion object is converted into text and the importance is set.

As shown in fig. 10, the center of gravity position (txt) of the insertion object, i.e., the text, is calculated by equations (5) and (6) so that the moments of the foreground image and the insertion object are balanced, with the center of gravity positions symmetric about the center (imgC) of the image relative to the center of gravity position (fg) of the foreground image.
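
A plausible form of equations (5) and (6), reconstructed from the definitions above (the published formulas are not reproduced here), is a moment balance about the image center:

α_img·W_img·(x_fg − x_imgC) = α_txt·W_txt·(x_imgC − x_txt)  (5)

α_img·W_img·(y_fg − y_imgC) = α_txt·W_txt·(y_imgC − y_txt)  (6)

where W_img and W_txt denote the summed color-difference weights of the foreground and the text, respectively.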

As a result, as shown in fig. 10, the center of gravity position (x_txt, y_txt) of the insertion object is as follows:

Center of gravity position of the insertion object: (x_txt, y_txt) = (59, 112)

Figs. 11 and 12 show example 2.

In example 2, as shown in fig. 11, an insertion object consisting of the text "kyoto walk" is inserted into the background image of an original image of 360 × 310 pixels. The orientation of the text box is portrait.

The foreground image is recognized as a building, and the dominant color (R_bg, G_bg, B_bg) of the background image is as follows:

Dominant color of the background image: (R_bg, G_bg, B_bg) = (167, 203, 204)

The center of gravity position of the foreground image is calculated as in example 1, as follows:

Center of gravity position of the foreground image: (x_fg, y_fg) = (215, 192)

In addition, the center of the original image is (x_imgC, y_imgC) = (180, 155).

The importance of the building and the text is as follows:

Importance of the building: α_fg = 0.05

Importance of the text: α_txt = 1.0

The center of gravity position of the insertion object is calculated as in example 1, as follows:

Center of gravity position of the insertion object: (x_txt, y_txt) = (141, 41)

As a result, the insertion object extends beyond the original image, as shown in fig. 12.

Therefore, as in step S38, the insertion object is moved along the center-of-gravity symmetry line, and the insertion object is accommodated in the background image at the following position:

Center of gravity position of the moved insertion object: (x_txt, y_txt) = (145, 51)

Figs. 13 and 14 show example 3.

In example 3, as shown in fig. 13, an insertion object consisting of the text "natural protection association" is inserted into the background image of an original image of 360 × 239 pixels. The orientation of the text box is landscape.

The foreground image is recognized as an animal, and the dominant color (R_bg, G_bg, B_bg) of the background image is as follows:

Dominant color of the background image: (R_bg, G_bg, B_bg) = (184, 199, 216)

The center of gravity position of the foreground image is calculated as in example 1, as follows:

Center of gravity position of the foreground image: (x_fg, y_fg) = (221, 180)

In addition, the center of the original image is (x_imgC, y_imgC) = (180, 119).

The importance of the animal and the text is as follows:

Importance of the animal: α_fg = 0.1

Importance of the text: α_txt = 1.0

The center of gravity position of the insertion object is calculated as in example 1, as follows:

Center of gravity position of the insertion object: (x_txt, y_txt) = (131, 47)

As a result, the insertion object overlaps the foreground image as shown in fig. 14.

Therefore, as in step S36, the insertion object is moved along the center-of-gravity symmetry line to the following position, where it no longer overlaps the foreground image:

Center of gravity position of the moved insertion object: (x_txt, y_txt) = (34, 21)

Figs. 15, 16, and 17 show example 4.

In example 4, as shown in fig. 15, an insertion object consisting of an illustration and the text "child photo studio" is inserted into the background image of an original image of 360 × 240 pixels. The orientation of the text box is portrait.

The foreground image is recognized as a person, and the dominant color (R_bg, G_bg, B_bg) of the background image is as follows:

Dominant color of the background image: (R_bg, G_bg, B_bg) = (200, 198, 207)

The center of gravity position of the foreground image is calculated as in example 1, as follows:

Center of gravity position of the foreground image: (x_fg, y_fg) = (166, 142)

In addition, the center of the original image is (x_imgC, y_imgC) = (180, 120).

The importance coefficients of the person, the illustration, and the text are read, and the illustration is inserted into the background image first.

The arrangement space of the illustration overlaps the foreground image, so the center of gravity of the illustration is moved to a nearby vacant space, as shown in fig. 16. As a result, the center of gravity position of the illustration is as follows:

Center of gravity position of the illustration: (x_illus, y_illus) = (238, 90)

Next, the text is inserted into the background image; for this step, the already placed illustration is treated as part of the image into which the text is inserted. That is, the illustration is included in the foreground image, and the center of gravity position of the foreground image is recalculated.

As a result of the recalculation, the new foreground center of gravity position is as follows:

New foreground center of gravity position: (x_fg, y_fg) = (185, 110)

When the arrangement position of the text is calculated with the new foreground center of gravity position as a reference, the arrangement space of the text overlaps the foreground image, as shown in fig. 17, so the center of gravity position of the text is moved to a nearby vacant space. As a result, the text is arranged in the background image as follows:

Center of gravity position of the text: (x_txt, y_txt) = (74, 110)
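
Example 4 (and the eleventh aspect) amounts to a two-pass placement: place the first object, fold it into the foreground, recompute the centroid, then place the second object. A minimal sketch follows, approximating each element by a (centroid, weight) pair; the weights and the simple point reflection are illustrative assumptions, so the numbers it prints are not those of the example.

# Two-pass placement sketch for example 4 / the eleventh aspect.
# Each element is approximated by (x, y, weight); the weights stand in
# for alpha * sum(w) from equations (2)-(4) and are illustrative.

def point_symmetric(cog, center):
    # Position point-symmetric to cog about the image center.
    return (2 * center[0] - cog[0], 2 * center[1] - cog[1])

def combined_centroid(elements):
    # Weighted centroid of (x, y, weight) elements.
    m0 = sum(w for _, _, w in elements)
    cx = sum(x * w for x, _, w in elements) / m0
    cy = sum(y * w for _, y, w in elements) / m0
    return (cx, cy)

center = (180, 120)              # image center from example 4
foreground = [(166, 142, 1.0)]   # person: centroid and assumed weight

# Pass 1: place the illustration opposite the foreground centroid.
illus = point_symmetric(combined_centroid(foreground), center)

# Fold the placed illustration into the foreground (assumed weight),
# recompute the centroid, then place the text (pass 2).
foreground.append((illus[0], illus[1], 0.4))
text = point_symmetric(combined_centroid(foreground), center)
print(illus, text)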

FIG. 18 shows example 5.

Example 5 illustrates the case where the size (width and height) and the color of the text serving as the insertion object are changed.

That is, fig. 18 shows a screen displayed on the display 30 described above. On this screen, an operation instruction unit 60 is arranged on the left side and a result display unit 62 on the right side. The operation instruction unit 60 has a text size specifying unit 64 and a text color specifying unit 66. The text size specifying unit 64 specifies the size of the text, and the text color specifying unit 66 specifies the color of the text. The text color specifying unit 66 can specify a color used in the image shown in the result display unit 62; setting the text to a color used in the image reduces the sense of incongruity compared with black text. However, white or black may also be specified as the text color.

Now, if the text size is increased with the text size specifying unit 64 and the text color is made darker with the text color specifying unit 66, the display of the result display unit 62 changes from the state of fig. 18 (a) to the state of fig. 18 (b).

Here, because the text becomes larger, the color difference between the text and the background image becomes larger, so the moment weight of the text increases and the text moves toward the center of the screen.

In the above embodiments, "processor" refers to a processor in the broad sense, including general-purpose processors (e.g., CPU) and special-purpose processors (e.g., GPU: Graphics Processing Unit, ASIC: Application Specific Integrated Circuit, FPGA: Field Programmable Gate Array, programmable logic devices, etc.).

Note that the operation of the processor in the above-described embodiment may be realized not only by one processor but also by cooperation of a plurality of processors that exist at physically separate locations. The order of operations of the processor is not limited to the order described in the above embodiments, and may be changed as appropriate.
