Method and device for generating hair model, storage medium and electronic equipment
1. A method of generating a hair model, the method comprising:
acquiring a preset number of hair patches;
shifting the vertex of the hair patch along the normal direction of the hair patch, performing transparency processing on the hair patch obtained after shifting, and superposing a plurality of hair patches obtained after transparency processing to generate an initial hair model;
controlling the growth direction of the initial hair model to extend along the normal direction corresponding to the normal map according to the normal map of the initial hair model;
rendering the initial hair model based on the diffuse reflection map of the initial hair model to generate a target hair model.
2. The method of claim 1, wherein said obtaining a preset number of hair patches comprises:
obtaining a noise map, wherein the noise map comprises a plurality of noise points which are used for representing the distribution area of the hair;
and copying the noise maps until a preset copying number is reached to obtain the hair patches with the preset number.
3. The method according to claim 1, wherein the shifting the vertex of the hair patch in the normal direction of the hair patch and performing transparency processing on the shifted hair patch includes:
shifting the vertex of the hair patch along the normal direction of the hair patch according to a preset offset value until a preset number of shifts is reached;
for each hair patch obtained after the shift, carrying out attenuation processing on the opacity of the hair patch along the preset direction of the hair patch according to the shift sequence of the hair patch;
wherein the attenuation amount of the opacity of a previous hair patch is smaller than the attenuation amount of the opacity of a next hair patch, and the preset direction is the radially outward direction of the hair patch.
4. The method of claim 3, wherein, when attenuating the opacity of the hair patch in the predetermined direction of the hair patch in the shifted order of the hair patch, the method further comprises:
determining an initial attenuation amount of the opacity of the hair patch according to the offset sequence of the hair patch, wherein the initial attenuation amount of the opacity of a previous hair patch is smaller than the initial attenuation amount of the opacity of a next hair patch;
and performing attenuation processing on the opacity of the hair patch based on the initial attenuation amount.
5. The method of claim 3, further comprising:
the transparency of the resulting hair patch after each shift is calculated by the following formula:
Alpha=(Noise*2-(Fur_offset*Fur_offset+(Fur_offset*Furmask*5)))*Timing+FurOpacity
wherein Alpha is the transparency of the hair patch obtained after the shift, Noise is the color value of the noise map, Fur_offset is the layering control quantity of the hair patch, Furmask is the color value of the hair mask map, Timing is a preset layering coefficient, and FurOpacity is the opacity of the hair.
6. The method of claim 1, wherein when the vertex of the hair patch is offset along a normal direction of the hair patch, the method further comprises:
determining an offset value, in a preset external force direction, of the hair patch obtained after the shift according to the shift order of the hair patch obtained after the shift, wherein the preset external force direction comprises a preset gravity direction and/or a preset wind direction;
and controlling the hair patch obtained after the shift to be offset by the offset value along the preset external force direction.
7. The method according to claim 1, wherein said controlling the growth direction of the initial hair model to extend along a normal direction corresponding to the normal map of the initial hair model according to the normal map of the initial hair model comprises:
acquiring a vertex normal of the initial hair model;
and replacing the vertex normal with a normal map of the initial hair model to control the growth direction of the initial hair model to extend along the normal direction corresponding to the normal map.
8. The method of claim 1, wherein rendering the initial hair model based on the diffuse reflection map of the initial hair model to generate a target hair model comprises:
setting a base color of the initial hair model by the diffuse reflection map;
and calculating an adjusted color of the initial hair model according to the base color of the initial hair model, and rendering the initial hair model according to the adjusted color to generate the target hair model.
9. The method according to claim 8, wherein said calculating an adjusted color of the initial hair model from a base color of the initial hair model comprises:
calculating an adjusted color of the initial hair model by:
C_para = C_base * C_T
C = C_para * C_sun + C_env
wherein C is the adjusted color of the initial hair model, C_para is an intermediate parameter for calculating the adjusted color of the initial hair model, C_base is the base color of the initial hair model, C_T is the dyeing color of the initial hair model, C_sun is the sunlight color, and C_env is the ambient light color.
10. An apparatus for generating a hair model, the apparatus comprising:
the acquisition module is used for acquiring a preset number of hair patches;
the shifting module is used for shifting the vertex of the hair patch along the normal direction of the hair patch, performing transparency processing on the hair patch obtained after shifting, and superposing a plurality of hair patches obtained after transparency processing to generate an initial hair model;
the control module is used for controlling the growth direction of the initial hair model to extend along the normal direction corresponding to the normal map according to the normal map of the initial hair model;
and the rendering module is used for rendering the initial hair model based on the diffuse reflection map of the initial hair model to generate a target hair model.
11. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the method of any one of claims 1-9.
12. An electronic device, comprising:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the method of any of claims 1-9 via execution of the executable instructions.
Background
With the development of computer technology and the like, three-dimensional virtual objects become an important part in the fields of game production, animation production and the like, and are widely popular with the public because of rich and vivid visual effects.
Among these, in some three-dimensional virtual objects, such as characters, animals and other virtual objects, the hair model is a very important component, and may include, for example, the hair of a character, the fur of an animal, the fur of a plush material, etc. In existing hair model production, a deepened color is usually chosen for the hair root so that the hair model presents a sense of layering and three-dimensionality; at the same time, however, the hair root becomes too dark, and the visual effect of the hair model as a whole is poor.
It is to be noted that the information disclosed in the above background section is only for enhancement of understanding of the background of the present disclosure, and thus may include information that does not constitute prior art known to those of ordinary skill in the art.
Disclosure of Invention
The present disclosure provides a method for generating a hair model, a device for generating a hair model, a computer-readable storage medium, and an electronic device, thereby alleviating, at least to some extent, the problem of poor visual effect of hair models in the prior art.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows, or in part will be obvious from the description, or may be learned by practice of the disclosure.
According to a first aspect of the present disclosure, there is provided a method of generating a hair model, the method comprising: acquiring a preset number of hair patches; shifting the vertex of the hair patch along the normal direction of the hair patch, performing transparency processing on the hair patch obtained after shifting, and superposing a plurality of hair patches obtained after transparency processing to generate an initial hair model; controlling the growth direction of the initial hair model to extend along the normal direction corresponding to the normal map according to the normal map of the initial hair model; and rendering the initial hair model based on the diffuse reflection map of the initial hair model to generate a target hair model.
In an exemplary embodiment of the present disclosure, the obtaining a preset number of hair patches includes: obtaining a noise map, wherein the noise map comprises a plurality of noise points which are used for representing the distribution area of the hair; and copying the noise maps until a preset copying number is reached to obtain the hair patches with the preset number.
In an exemplary embodiment of the present disclosure, the shifting the vertex of the hair patch along the normal direction of the hair patch and performing transparency processing on the shifted hair patch includes: shifting the vertex of the hair patch along the normal direction of the hair patch according to a preset offset value until a preset number of shifts is reached; and for each hair patch obtained after the shift, performing attenuation processing on the opacity of the hair patch along the preset direction of the hair patch according to the shift order of the hair patch; wherein the attenuation amount of the opacity of a previous hair patch is smaller than that of a next hair patch, and the preset direction is the radially outward direction of the hair patch.
In an exemplary embodiment of the present disclosure, when the opacity of the hair patch is attenuated along the preset direction of the hair patch in the offset order of the hair patch, the method further includes: determining an initial attenuation amount of the opacity of the hair patch according to the offset sequence of the hair patch, wherein the initial attenuation amount of the opacity of a previous hair patch is smaller than the initial attenuation amount of the opacity of a next hair patch; and performing attenuation processing on the opacity of the hair patch based on the initial attenuation amount.
In an exemplary embodiment of the present disclosure, the method further comprises: the transparency of the resulting hair patch after each shift is calculated by the following formula:
Alpha=(Noise*2-(Fur_offset*Fur_offset+(Fur_offset*Furmask*5)))*Timing+FurOpacity
wherein Alpha is the transparency of the hair patch obtained after the shift, Noise is the color value of the noise map, Fur_offset is the layering control quantity of the hair patch, Furmask is the color value of the hair mask map, Timing is a preset layering coefficient, and FurOpacity is the opacity of the hair.
In an exemplary embodiment of the present disclosure, when the vertex of the hair patch is shifted in a normal direction of the hair patch, the method further includes: determining an offset value, in a preset external force direction, of the hair patch obtained after the shift according to the shift order of the hair patch obtained after the shift, wherein the preset external force direction comprises a preset gravity direction and/or a preset wind direction; and controlling the hair patch obtained after the shift to be offset by the offset value along the preset external force direction.
In an exemplary embodiment of the present disclosure, the controlling, according to the normal map of the initial hair model, the growth direction of the initial hair model to extend along the normal direction corresponding to the normal map includes: acquiring a vertex normal of the initial hair model; and replacing the vertex normal with a normal map of the initial hair model to control the growth direction of the initial hair model to extend along the normal direction corresponding to the normal map.
In an exemplary embodiment of the present disclosure, the rendering the initial hair model based on the diffuse reflection map of the initial hair model to generate a target hair model includes: setting a base color of the initial hair model by the diffuse reflection map; and calculating an adjusted color of the initial hair model according to the base color of the initial hair model, and rendering the initial hair model according to the adjusted color to generate the target hair model.
In an exemplary embodiment of the present disclosure, the calculating an adjusted color of the initial hair model from the base color of the initial hair model includes: calculating an adjusted color of the initial hair model by:
C_para = C_base * C_T
C = C_para * C_sun + C_env
wherein C is the adjusted color of the initial hair model, C_para is an intermediate parameter for calculating the adjusted color of the initial hair model, C_base is the base color of the initial hair model, C_T is the dyeing color of the initial hair model, C_sun is the sunlight color, and C_env is the ambient light color.
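As a rough illustration of the color adjustment described above, the following sketch multiplies an RGB base color by a dyeing color and then applies sunlight and ambient light componentwise; the function name and the concrete RGB values are illustrative assumptions, not taken from the disclosure.

```python
import numpy as np

# Hedged sketch of the disclosed color adjustment:
#   C_para = C_base * C_T
#   C      = C_para * C_sun + C_env
# All colors are RGB triples; multiplication is componentwise.
def adjusted_color(c_base, c_tint, c_sun, c_env):
    c_para = c_base * c_tint       # intermediate parameter C_para
    return c_para * c_sun + c_env  # final adjusted color C

c = adjusted_color(np.array([0.6, 0.4, 0.2]),    # base color from the diffuse map
                   np.array([1.0, 0.9, 0.8]),    # dyeing color C_T
                   np.array([1.0, 1.0, 0.9]),    # sunlight color
                   np.array([0.05, 0.05, 0.1]))  # ambient light color
```

Because the ambient term is added after the sunlight multiplication, hair in shadow never goes fully black, which matches the stated goal of avoiding overly dark roots.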
According to a second aspect of the present disclosure, there is provided an apparatus for generating a hair model, the apparatus comprising: the acquisition module is used for acquiring a preset number of hair patches; the shifting module is used for shifting the vertex of the hair patch along the normal direction of the hair patch, performing transparency processing on the hair patch obtained after shifting, and superposing a plurality of hair patches obtained after transparency processing to generate an initial hair model; the control module is used for controlling the growth direction of the initial hair model to extend along the normal direction corresponding to the normal map according to the normal map of the initial hair model; and the rendering module is used for rendering the initial hair model based on the diffuse reflection map of the initial hair model to generate a target hair model.
In an exemplary embodiment of the disclosure, the obtaining module is configured to obtain a noise map, where the noise map includes a plurality of noise points, and the plurality of noise points are used to represent distribution areas of hairs, and copy the noise map until a preset number of times of copying is reached, so as to obtain the preset number of hair patches.
In an exemplary embodiment of the disclosure, the shifting module is configured to shift vertices of the hair patch along a normal direction of the hair patch by a preset shift value until reaching a preset shift number, and for each hair patch obtained after the shifting, perform an attenuation process on an opacity of the hair patch along a preset direction of the hair patch according to a shift order of the hair patch, where an attenuation amount of the opacity of a previous hair patch is smaller than an attenuation amount of the opacity of a next hair patch, and the preset direction is a direction radially outward of the hair patch.
In an exemplary embodiment of the disclosure, when the opacity of the hair patch is attenuated in the preset direction of the hair patch according to the shift order of the hair patch, the shift module is further configured to determine an initial attenuation amount of the opacity of the hair patch according to the shift order of the hair patch, wherein the initial attenuation amount of the opacity of a previous hair patch is smaller than the initial attenuation amount of the opacity of a next hair patch, and the opacity of the hair patch is attenuated based on the initial attenuation amount.
In an exemplary embodiment of the disclosure, the offset module is further configured to calculate the transparency of the hair patch obtained after each offset by the following formula:
Alpha=(Noise*2-(Fur_offset*Fur_offset+(Fur_offset*Furmask*5)))*Timing+FurOpacity
wherein Alpha is the transparency of the hair patch obtained after the shift, Noise is the color value of the noise map, Fur_offset is the layering control quantity of the hair patch, Furmask is the color value of the hair mask map, Timing is a preset layering coefficient, and FurOpacity is the opacity of the hair.
In an exemplary embodiment of the disclosure, when the vertex of the hair patch is shifted along the normal direction of the hair patch, the shifting module is further configured to determine, according to a shifting sequence of the hair patch obtained after the shifting, a shifting value of the hair patch obtained after the shifting in a preset external force direction, where the preset external force direction includes a preset gravity direction and/or a preset wind direction, and control the hair patch obtained after the shifting to shift along the preset external force direction by the shifting value.
In an exemplary embodiment of the disclosure, the control module is configured to obtain a vertex normal of the initial hair model, and replace the vertex normal with a normal map of the initial hair model, so as to control a growth direction of the initial hair model to extend along a normal direction corresponding to the normal map.
In an exemplary embodiment of the present disclosure, the rendering module is configured to set a base color of the initial hair model through the diffuse reflection map, calculate an adjusted color of the initial hair model according to the base color of the initial hair model, and render the initial hair model according to the adjusted color to generate the target hair model.
In an exemplary embodiment of the disclosure, the rendering module is further configured to calculate the adjusted color of the initial hair model by the following formula:
C_para = C_base * C_T
C = C_para * C_sun + C_env
wherein C is the adjusted color of the initial hair model, C_para is an intermediate parameter for calculating the adjusted color of the initial hair model, C_base is the base color of the initial hair model, C_T is the dyeing color of the initial hair model, C_sun is the sunlight color, and C_env is the ambient light color.
According to a third aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements any of the above-described methods of generating a hair model.
According to a fourth aspect of the present disclosure, there is provided an electronic device comprising: a processor; and a memory for storing executable instructions of the processor; wherein the processor is configured to perform any one of the above-described methods of hair model generation via execution of the executable instructions.
The present disclosure has the following beneficial effects:
according to the hair model generation method, the hair model generation device, the computer-readable storage medium, and the electronic device in the exemplary embodiment, vertices of a hair patch may be shifted along a normal direction of the obtained hair patch, transparency processing may be performed on the shifted hair patches, and a plurality of hair patches obtained after the transparency processing may be superimposed to generate an initial hair model; a growth direction of the initial hair model may be controlled to extend along a normal direction corresponding to the normal map according to the normal map of the initial hair model, and the initial hair model may be rendered based on a diffuse reflection map of the initial hair model to generate a target hair model. On the one hand, the initial hair model is generated by shifting the vertices of the hair patches, performing transparency processing on the shifted hair patches, and superposing a plurality of hair patches subjected to the transparency processing, so that the producer does not need to draw hairs manually; this can greatly improve the efficiency of producing the hair model and, in particular, allows a highly detailed hair model to be generated when producing a complex three-dimensional virtual object. On the other hand, the growth direction of the initial hair model is controlled to extend along the normal direction corresponding to the normal map of the initial hair model, so that the hair direction can be accurately controlled and distortion of the hair model can be avoided.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure. It is apparent that the drawings in the following description are only some embodiments of the present disclosure, and that other drawings can be obtained from those drawings without inventive effort for a person skilled in the art.
FIG. 1 shows a flow chart of a method of generating a hair model in the present exemplary embodiment;
FIG. 2 shows a schematic diagram of a noise map in the present exemplary embodiment;
FIG. 3 shows a schematic diagram of one generation of a hair structure in this exemplary embodiment;
FIG. 4 illustrates a sub-flowchart of a method of generating a hair model in the present exemplary embodiment;
FIGS. 5A and 5B show schematic diagrams of transparency attenuation of hair in the present exemplary embodiment;
FIGS. 6A and 6B are schematic diagrams illustrating a localized hair region before and after offsetting by an external force, respectively, in the present exemplary embodiment;
FIG. 7 is a schematic diagram illustrating the effect of a hair model after offsetting by an external force in the present exemplary embodiment;
FIGS. 8A and 8B are schematic views showing a hair model before and after distortion correction, respectively, in the present exemplary embodiment;
FIG. 9 shows a schematic diagram of a hair model after base coloring in this exemplary embodiment;
FIG. 10 shows a schematic diagram of a target hair model in this exemplary embodiment;
FIGS. 11A and 11B are schematic diagrams illustrating a target hair model before and after adjusted-color rendering, respectively, in the present exemplary embodiment;
FIG. 12 is a block diagram showing the configuration of a hair model generation apparatus in the present exemplary embodiment;
FIG. 13 illustrates a computer-readable storage medium for implementing the above-described method in the present exemplary embodiment;
fig. 14 shows an electronic device for implementing the above method in the present exemplary embodiment.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
In recent years, the performance of terminal devices has continued to improve, and terminal devices are now capable of supporting more elaborate rendering of three-dimensional virtual objects. On this basis, in order to meet users' visual requirements for exquisite pictures and to present fine, detailed hair effects, producers need to continuously optimize the visual effect of hair models.
Based on this, the exemplary embodiments of the present disclosure first provide a method of generating a hair model. The method can be applied to electronic equipment, and can process the acquired hair patch to generate a target hair model, and the target hair model can be arranged on the surface of the three-dimensional virtual object, so that the three-dimensional virtual object presents a corresponding hair effect. The electronic device may be a terminal device or a server, the terminal device may be a computer, a tablet computer, a smart phone, or the like, and the server may be a single server or a server cluster formed by physical servers, or a cloud server providing cloud computing services, or the like.
The method for generating a hair model according to the present exemplary embodiment may be generally performed by a terminal device, and accordingly, the hair model generating means may be provided in the terminal device. However, it is easily understood by those skilled in the art that the method for generating the hair model provided in the present exemplary embodiment may also be executed by a server, and accordingly, the hair model generating device may be disposed in the server, and the present exemplary embodiment is not particularly limited thereto. For example, in an alternative embodiment, the terminal device may execute the hair model generation method in the present exemplary embodiment to process the acquired hair patch to generate the target hair model, or the server may receive the hair patch transmitted by the terminal device and process the hair patch by executing the hair model generation method in the present exemplary embodiment to generate the target hair model, and then transmit the target hair model to the terminal device.
Fig. 1 shows a flow of the present exemplary embodiment, which may include the following steps S110 to S140:
and S110, acquiring a preset number of hair patches.
The hair patch refers to a patch that can be used to make a hair model, and may be a triangular patch, a quadrangular patch, or any other patch. In addition, in the present exemplary embodiment, the thickness of the hair patch is negligible.
Generally, a hair patch may be generated in advance by a producer, for example, patch parameters such as shape, side length, etc. may be input into three-dimensional model software to generate a corresponding hair patch, or a default patch may be selected from the three-dimensional model software as the hair patch.
In fact, each hair patch can generate a number of hairs. In order to make it more convenient to generate multiple hairs and to increase the density of the hairs in the generated hair model, in an alternative embodiment, step S110 can be implemented by the following steps:
acquiring a noise map;
and copying the noise maps until the preset copying times are reached to obtain the hair patches with the preset number.
Here, the noise map may include a plurality of noise points that represent the distribution area of the hair; for example, referring to fig. 2, each cell, i.e., each noise point, corresponds to one hair. The preset number of copies can be determined by the producer according to the number of noise points contained in each noise map: when a noise map contains many noise points, fewer copies are needed to obtain the hair patches required for a large number of hairs; conversely, when a noise map contains few noise points, more copies are needed.
By the method, the operation flow of generating the hair patches can be simplified, and the efficiency of generating the hair patches with the preset number is improved.
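Under the assumption that the noise map is held as a small grayscale array, the duplication step described above can be sketched as follows; the function and parameter names are illustrative, not taken from the disclosure.

```python
import numpy as np

# Minimal sketch of step S110: a noise map (bright texels mark where hairs
# grow) is copied a preset number of times, and each copy serves as one hair
# patch carrying the same hair distribution.
def make_hair_patches(noise_map, preset_copy_count):
    # Independent copies, so later steps can offset and fade each one separately.
    return [noise_map.copy() for _ in range(preset_copy_count)]

# Stand-in noise map: a random binary 8x8 texture.
noise = (np.random.default_rng(0).random((8, 8)) > 0.5).astype(float)
patches = make_hair_patches(noise, 25)
```

Each patch starts identical to the source noise map; the shifting and transparency steps that follow are what differentiate the layers.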
And S120, offsetting the vertices of the hair patches along the normal direction of the hair patches, performing transparency processing on the hair patches obtained after offsetting, and superposing a plurality of hair patches obtained after transparency processing to generate an initial hair model.
The normal direction of a hair patch refers to the direction perpendicular to the hair patch, and the vertices of the hair patch can be set by the producer using a corresponding shader. One or more vertices may be selected in a hair patch; for example, any point of each noise point in a hair patch may be set as a vertex, thereby obtaining a plurality of vertices. The vertices of the hair patch are offset along the normal direction of the hair patch, and transparency processing is then performed on the offset hair patches so that their edges present increasingly transparent areas; the superposed hair patches thus taper from thick to thin, producing the visual effect of hair. For example, fig. 3 shows a schematic diagram of generating a hair model: each triangular sheet-like structure can be regarded as a hair patch; vertex shifting of one hair patch yields a plurality of hair patches distributed in parallel with it; and transparency processing makes the opaque area of each shifted hair patch smaller and smaller, so that when the layers are closely spaced the superposed hair patches present the shape of a hair.
By the method, the vertices are extruded out of the surface of the hair patch along the normal direction of the hair patch, transparency processing is then performed on the hair patches obtained after extrusion, and the plurality of processed hair patches are superposed to generate the initial hair model, so that the producer does not need to draw hairs manually; this can greatly improve the efficiency of producing the hair model and, in particular, allows a highly detailed hair model to be generated when producing a complex three-dimensional virtual object.
Specifically, in an alternative embodiment, referring to fig. 4, step S120 may be implemented by the following steps S410 to S420:
and step S410, shifting the vertex of the hair patch according to a preset offset value along the normal direction of the hair patch until reaching the preset offset times.
The preset offset value may be calculated by a corresponding function, or may be directly set to a fixed value, that is, the hair patch is offset by a fixed distance in the normal direction of the hair patch each time until the preset offset times are reached.
For each hair patch, the vertex of the hair patch can be shifted by the preset offset value along the normal direction of the hair patch each time until the preset number of shifts is reached, so as to obtain one or more shifted hair patches from one original hair patch, wherein the offset values of two adjacent hair patches differ by one preset offset value. For a more complex three-dimensional virtual object, such as a three-dimensional virtual object in a game, the vertices usually need to be shifted 20 to 30 times for the superimposed hair patches to present the effect of hair.
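The per-shift vertex offset just described can be sketched as follows; the vertex and normal arrays, and all names (shell_layers, offset_value, shift_count), are illustrative assumptions rather than the disclosed shader code.

```python
import numpy as np

# Minimal sketch of step S410: each shift i pushes every vertex outward along
# its normal by i times the preset offset value, producing one layer per shift.
def shell_layers(vertices, normals, offset_value, shift_count):
    layers = []
    for i in range(1, shift_count + 1):
        # Adjacent layers differ by exactly one preset offset value.
        layers.append(vertices + normals * (i * offset_value))
    return layers

verts = np.zeros((3, 3))                       # a single triangle at the origin
norms = np.tile([0.0, 0.0, 1.0], (3, 1))       # unit normals along +z
layers = shell_layers(verts, norms, 0.01, 25)  # 20 to 30 shifts are typical
```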
In step S420, for each of the hair patches obtained after the shift, attenuation processing is performed on the opacity of the hair patch along the preset direction of the hair patch according to the shift order of the hair patch.
The attenuation amount of the opacity of the previous hair patch is smaller than that of the opacity of the next hair patch, and the preset direction is the outward direction in the radial direction of the hair patch.
Specifically, for each hair patch obtained after the shift, for example the hair patch in shift order i, opacity attenuation processing may be performed on the hair patch in the preset direction according to the number of shifts of the hair patch, so that in the transparency channel the parts of the hair patch closer to the edge are more transparent (darker areas are more transparent, with full black indicating full transparency). The attenuation processing then continues for the next hair patch, i.e., the hair patch in shift order i+1, whose opacity attenuation is larger than that of the hair patch in shift order i. That is, the later a hair patch comes in the shift order, the larger its opacity attenuation and the larger its transparent area; the transparent area acts as a cut-off area, so that of two adjacent hair patches the later one appears smaller. Referring to fig. 5A, the closer a hair patch is to the top, the larger its opacity attenuation and the smaller it looks; when the hair patches after each shift are superimposed in order, all the hair patches form a conical structure, i.e., the hair structure shown in fig. 5B is obtained.
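One possible way to realize this radially outward opacity fade, with the attenuation growing with the shift order, is sketched below; the concrete linear fall-off curve and all names are assumptions for illustration, not the formula disclosed later.

```python
import numpy as np

# Sketch of the cone-forming fade: opacity falls off radially from the patch
# centre, and the fall-off grows with the layer's shift order, so later
# layers keep a smaller opaque disc.
def layer_opacity(size, shift_index, shift_count):
    ys, xs = np.mgrid[0:size, 0:size]
    centre = (size - 1) / 2.0
    radius = np.hypot(xs - centre, ys - centre) / centre  # 0 at centre
    # A previous layer fades less than the next one, shrinking the
    # opaque area layer by layer.
    attenuation = (shift_index + 1) / shift_count
    return np.clip(1.0 - radius * attenuation * 2.0, 0.0, 1.0)

low  = layer_opacity(65, 0, 25)   # first shifted layer: large opaque area
high = layer_opacity(65, 24, 25)  # last shifted layer: small opaque area
```

Stacking such layers gives the conical silhouette of fig. 5B: each layer's opaque disc is strictly smaller than the one beneath it.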
Further, in an optional implementation manner, in step S420, the following method may also be performed:
determining an initial attenuation amount of the opacity of the hair patch according to the offset sequence of the hair patch;
and performing attenuation processing on the opacity of the hair patch based on the initial attenuation amount.
Wherein the initial attenuation amount of the opacity of a previous hair patch is smaller than that of a subsequent hair patch. The difference between the initial attenuation amounts of the opacity of the hair patches obtained after two adjacent shifts can be set by the maker; for example, the initial attenuation amount of a shifted hair patch increases with the number of shifts, and the initial attenuation amounts of the opacity of the hair patches obtained after two adjacent shifts may differ by a fixed value.
For example, after the hair patches have been shifted, each hair patch may be rendered pixel by pixel. During rendering, the initial attenuation amount of the opacity of each hair patch is determined, and each pixel is rendered in the radially outward direction of the hair patch according to that initial attenuation amount; for instance, the opacity attenuation at the current pixel may be derived from the initial attenuation amount, and the transparency value of the current pixel is then determined and set, so that each hair patch becomes increasingly transparent from its center point outward.
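The per-pixel attenuation described above can be sketched as a small per-texel function. The linear falloff and the fixed per-shell step are assumptions chosen for illustration; the embodiment only requires that later shells attenuate more.

```python
def layer_opacity(u, v, shell_index, num_shells, falloff=1.0):
    """Opacity of one pixel of shell `shell_index`, attenuated radially.

    The initial attenuation grows linearly with the shell index, so
    later (outer) shells have larger transparent regions and the
    stacked shells taper into the cone-like strand shape.
    """
    # Distance from the patch centre (u, v in [0, 1]), scaled to ~[0, 1].
    r = 2.0 * ((u - 0.5) ** 2 + (v - 0.5) ** 2) ** 0.5
    initial_attenuation = shell_index / num_shells  # fixed step per shell
    opacity = 1.0 - initial_attenuation - falloff * r
    return max(0.0, min(1.0, opacity))
```

At the patch centre, shell 1 of 25 stays nearly opaque while shell 20 is mostly transparent, and within any one shell the opacity falls off toward the edge, as the text requires.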
In this exemplary embodiment, the opacity attenuation processing may be applied to every hair patch after the vertices have been shifted the preset number of times by the preset offset value, i.e., after all the shifted hair patches have been obtained; alternatively, it may be applied to each hair patch immediately after the shift that produces it. In addition, when determining the extension range of the hair, the vertex color of a hair patch can be used as a mask to control the growth range of the hair, i.e., where hair is present and where it is absent.
Further, in order to improve the efficiency of calculating the transparency of the hair patch, in an alternative embodiment, the transparency of the hair patch after each shift can be calculated by the following formula:
Alpha = (Noise*2 - (Fur_offset*Fur_offset + Fur_offset*Fur_mask*5)) * Timing + FurOpacity
wherein Alpha is the transparency of the hair patch obtained after the i-th shift; Noise is the value of the R channel of the noise map; Fur_offset is the layer control amount of the hair patch obtained after the i-th shift, with a value range of [0, 1]; Fur_mask is a controllable mask variable that can be used to control the growth range of the hair, i.e., to determine where hair exists and where it does not; and Timing and FurOpacity are further controllable variables that can be used to optimize the display effect of the hair model. In the present exemplary embodiment, the value ranges of the controllable variables Timing and FurOpacity may generally be set to [0, 1].
By the method, the transparency of the hair patch obtained after each offset can be rapidly calculated, and the efficiency of generating the hair model is improved.
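The transparency formula above can be transcribed directly. The function below is a sketch only; the argument names follow the formula's variables, and the default values for Timing and FurOpacity are assumptions within the [0, 1] ranges stated in the text.

```python
def fur_alpha(noise, fur_offset, fur_mask, timing=1.0, fur_opacity=0.0):
    """Transparency of the hair patch obtained after the i-th shift.

    Direct transcription of:
    Alpha = (Noise*2 - (Fur_offset^2 + Fur_offset*Fur_mask*5)) * Timing + FurOpacity
    """
    return (noise * 2 - (fur_offset * fur_offset
                         + fur_offset * fur_mask * 5)) * timing + fur_opacity
```

Note how the quadratic Fur_offset term makes the alpha drop off faster for outer shells (larger Fur_offset), which is what produces the tapering strand silhouette.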
Further, in an alternative embodiment, when the vertex of the hair patch is shifted along the normal direction of the hair patch, the following method may be further performed:
determining the offset value of the shifted hair patch in a preset external force direction according to the shift order of the shifted hair patch;
and controlling the shifted hair patch to move by the offset value along the preset external force direction.
The preset external force direction may include a preset gravity direction and/or a preset wind direction, and may be configured in advance by a manufacturer.
In order to enhance the realism of the hair model and create the visual effect of hair blowing in a certain direction, the offset value of each shifted hair patch in the preset external force direction can be determined according to its shift order. For example, the offset value of each shifted hair patch in the preset gravity direction or preset wind direction is determined from the shift order, and each shifted hair patch is then controlled to move by that offset value in the preset gravity direction or preset wind direction. In this way, the hair model exhibits a continuous deflection in a certain direction from root to tip.
For example, according to the shift order of the hair patches, the hair patch obtained after the first shift may be offset by 0.1 in the preset gravity direction or preset wind direction, the hair patch obtained after the second shift by 0.2, the hair patch obtained after the third shift by 0.3, and so on, until all the hair patches have been processed. Referring to fig. 6A, when no external force offset is applied, the hair extends in a straight line from root to tip; after the external force offset is applied, the hair shows the visual effect of being continuously deflected in a certain direction from root to tip, as shown in fig. 6B. Fig. 7 shows a schematic view of an initial hair model in the present exemplary embodiment; it can be seen that, over the entire initial hair model, the more outward a hair, the larger its deviation toward the gravity or wind direction. By this method, the realism of the hair model can be enhanced, and its visual effect improved.
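The 0.1/0.2/0.3 progression above amounts to moving shell i by (i + 1) × step along the force vector. A minimal sketch, assuming shells are lists of vertex tuples as in the earlier examples:

```python
def bend_shells(shells, force_dir, step=0.1):
    """Displace each shell along a preset external-force direction.

    Shell i moves (i + 1) * step along the unit gravity or wind
    vector, so outer shells drift further and the strand bends.
    """
    fx, fy, fz = force_dir
    norm = (fx * fx + fy * fy + fz * fz) ** 0.5
    fx, fy, fz = fx / norm, fy / norm, fz / norm
    bent = []
    for i, verts in enumerate(shells):
        d = step * (i + 1)
        bent.append([(x + fx * d, y + fy * d, z + fz * d) for x, y, z in verts])
    return bent

gravity = (0.0, -1.0, 0.0)
base = [[(0.0, 0.0, 0.0)] for _ in range(3)]  # three one-vertex shells
bent = bend_shells(base, gravity, step=0.1)
```

The first shell moves 0.1 downward, the third 0.3, reproducing the root-to-tip deflection of fig. 6B.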
And S130, controlling the growth direction of the initial hair model to extend along the normal direction corresponding to the normal map according to the normal map of the initial hair model.
The normal map of the initial hair model is obtained by taking a normal at each point of the surface of the initial hair model and encoding the direction of that normal in the RGB color channels.
Since the normal map can represent the texture of the initial hair model's surface and serves as an extension of bump mapping, it gives each pixel of each plane a height value and contains richer surface detail information. Therefore, the growth direction of the initial hair model can be controlled to extend along the normal direction corresponding to the normal map by baking the normal map from the initial hair model and assigning it to the normal map channel of the initial hair model.
Specifically, in an alternative embodiment, step S130 may be implemented by the following method:
acquiring a vertex normal of the initial hair model;
and replacing the vertex normal by the normal map of the initial hair model to control the growth direction of the initial hair model to extend along the normal direction corresponding to the normal map.
The vertex normal of the initial hair model is a vector passing through a vertex; in lighting calculations it produces a smooth shading effect on the surface of the polyhedron. By replacing the original vertex normals with the normal map of the initial hair model to control the growth direction of the initial hair model to extend along the normal direction corresponding to the normal map, the initial hair model can have a smooth surface texture while avoiding distortion. Figs. 8A and 8B illustrate a hair model before and after distortion correction according to the exemplary embodiment: an initial hair model using the vertex normals as the growth direction of the hair shows a large degree of distortion, as in fig. 8A, whereas an initial hair model using the normal map in place of the original vertex normals exhibits the correct hair structure, as in fig. 8B.
By the method, the normal map can be used for replacing the vertex normal of the original initial hair model, the distortion problem of the initial hair model can be solved, and compared with the vertex normal, the initial hair model can show a smooth effect by using the normal map, so that the visual display effect is better.
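Replacing the vertex normal with a normal-map sample starts with decoding the texel. The sketch below assumes the conventional [0, 1] → [-1, 1] RGB encoding of normal maps; the document does not state the encoding, so this is an illustrative assumption.

```python
def decode_normal(r, g, b):
    """Decode an RGB normal-map texel into a unit surface normal.

    Channels in [0, 1] map to components in [-1, 1]; the decoded,
    renormalized normal is then used in place of the interpolated
    vertex normal as the hair growth direction.
    """
    nx, ny, nz = r * 2.0 - 1.0, g * 2.0 - 1.0, b * 2.0 - 1.0
    length = (nx * nx + ny * ny + nz * nz) ** 0.5
    return (nx / length, ny / length, nz / length)
```

The typical pale-blue color of a normal map, (0.5, 0.5, 1.0), decodes to the straight-up normal (0, 0, 1).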
And S140, rendering the initial hair model based on the diffuse reflection map of the initial hair model to generate a target hair model.
The diffuse reflection map (Diffuse Map) represents the color and reflection properties of the surface of the initial hair model, that is, the color and intensity the initial hair model displays when illuminated by light.
In rendering, the initial hair model may be rendered according to the diffuse reflection map of the initial hair model, such as by coloring or adding light to the initial hair model to generate the target hair model.
In an alternative embodiment, step S140 may be implemented by:
setting the basic color of the initial hair model through the diffuse reflection map;
and calculating the adjustment color of the initial hair model according to the basic color of the initial hair model, and rendering the initial hair model according to the adjustment color to generate the target hair model.
The basic color refers to the base color of the initial hair model, and the adjustment color may include the hair color under illumination conditions such as ambient light, contour light, and sunlight. In the present exemplary embodiment, the basic color of the initial hair model may be rendered according to the UV unwrapping result of the initial hair model. Further, in order to enhance the color effect, the illumination effect, and the like of the initial hair model, an adjustment color may be calculated from the basic color, and the initial hair model rendered according to the adjustment color to generate the target hair model. For example, referring to fig. 9, the hair model after basic coloring exhibits the visual effect shown in fig. 9, as compared with the uncolored initial hair model shown in fig. 7.
Further, in an alternative embodiment, the adjusted color of the initial hair model may be calculated by the following formula:
C_para = C_base * C_T
C = C_para * C_sun + C_env
wherein C is the adjusted color of the initial hair model, C_para is an intermediate parameter for calculating the adjusted color of the initial hair model, C_base is the base color of the initial hair model, C_T is the tint (dye) color of the initial hair model, C_sun is the sunlight color, and C_env is the ambient light color.
Specifically, after the initial hair model has been subjected to basic coloring, the adjustment color of the initial hair model can be calculated, and the basically colored hair model rendered and colored again to obtain the hair model shown in fig. 10, i.e., the target hair model. It can be seen that the target hair model has richer light-and-shade detail and color effects than the hair model with only basic coloring.
By this method, the brightness of the roots of the initial hair model can be increased, so that the hair model achieves a better visual effect while preserving its sense of layering and depth; in particular, for light-colored hair, the problem of the dark areas being too deep can be noticeably improved. For example, fig. 11A shows a schematic diagram of a target hair model without the adjustment-color rendering process, and fig. 11B shows a schematic diagram of a target hair model after the adjustment-color rendering process; it can be seen that, compared with the target hair model in fig. 11A, the target hair model in fig. 11B significantly improves the problem of the roots of the hair model being too dark.
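The adjustment-color formulas can be sketched per channel. Treating the colors as RGB triples and the products as component-wise multiplication is an assumption for illustration; the document's formulas do not specify the color representation.

```python
def adjust_color(c_base, c_t, c_sun, c_env):
    """Adjusted colour per channel: C = (C_base * C_T) * C_sun + C_env.

    c_base : base colour from the diffuse map
    c_t    : tint (dye) colour of the hair
    c_sun  : sunlight colour/intensity
    c_env  : additive ambient-light term (lifts the dark roots)
    """
    c_para = [b * t for b, t in zip(c_base, c_t)]  # C_para = C_base * C_T
    return [p * s + e for p, s, e in zip(c_para, c_sun, c_env)]
```

Because C_env is additive, even pixels where the multiplicative terms are near zero (the shadowed roots) receive some color, which is why this pass brightens the roots.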
In summary, according to the method for generating a hair model in this exemplary embodiment, the vertices of the obtained hair patches may be shifted along the normal direction of the hair patches, the shifted hair patches subjected to transparency processing, and a plurality of transparency-processed hair patches superimposed to generate an initial hair model; the growth direction of the initial hair model is controlled to extend along the normal direction corresponding to the normal map of the initial hair model, and the initial hair model is rendered based on its diffuse reflection map to generate a target hair model. On one hand, generating the initial hair model by shifting the vertices of the hair patches, transparency-processing the shifted patches, and superimposing the results means the maker does not need to draw hairs manually, which can greatly improve the efficiency of producing the hair model; in particular, highly detailed hair models can be generated for complex three-dimensional virtual objects. On the other hand, controlling the growth direction of the initial hair model to extend along the normal direction corresponding to the normal map allows the hair direction to be accurately controlled and prevents the hair model from being distorted.
The present exemplary embodiment also provides a hair model generation apparatus, and as shown in fig. 12, the hair model generation apparatus 1200 may include: an obtaining module 1210, configured to obtain a preset number of hair patches; the shifting module 1220 may be configured to shift a vertex of a hair patch along a normal direction of the hair patch, perform transparency processing on the hair patch obtained after the shifting, and superimpose a plurality of hair patches obtained after the transparency processing to generate an initial hair model; a control module 1230, configured to control the growth direction of the initial hair model to extend along a normal direction corresponding to the normal map according to the normal map of the initial hair model; the rendering module 1240 may be configured to render the initial hair model based on the diffuse reflection map of the initial hair model to generate the target hair model.
In an exemplary embodiment of the disclosure, the obtaining module 1210 may be configured to obtain a noise map, where the noise map includes a plurality of noise points, and the plurality of noise points may be used to represent a distribution area of hairs, and copy the noise map until a preset number of copies is reached to obtain a preset number of hair patches.
In an exemplary embodiment of the disclosure, the shifting module 1220 may be configured to shift the vertex of the hair patch according to a preset shift value along a normal direction of the hair patch until reaching a preset shift number, and for each hair patch obtained after the shift, perform an attenuation process on the opacity of the hair patch along a preset direction of the hair patch according to a shift sequence of the hair patch, where an attenuation amount of the opacity of a previous hair patch is smaller than an attenuation amount of the opacity of a next hair patch, and the preset direction is a direction radially outward of the hair patch.
In an exemplary embodiment of the disclosure, when the opacity of the hair patch is attenuated along the preset direction of the hair patch according to the shifting sequence of the hair patch, the shifting module 1220 may be further configured to determine an initial attenuation amount of the opacity of the hair patch according to the shifting sequence of the hair patch, where the initial attenuation amount of the opacity of a previous hair patch is smaller than the initial attenuation amount of the opacity of a next hair patch, and the opacity of the hair patch is attenuated based on the initial attenuation amount.
In an exemplary embodiment of the disclosure, the offset module 1220 may be further configured to calculate the transparency of the hair patch obtained after each offset by the following formula:
Alpha = (Noise*2 - (Fur_offset*Fur_offset + Fur_offset*Fur_mask*5)) * Timing + FurOpacity
the method comprises the steps of obtaining a hair patch, obtaining a color value of the color patch.
In an exemplary embodiment of the disclosure, when the vertex of the hair patch is shifted along the normal direction of the hair patch, the shifting module 1220 may be further configured to determine, according to a shifting sequence of the hair patch obtained after shifting, a shifting value of the hair patch obtained after shifting in a preset external force direction, where the preset external force direction includes a preset gravity direction and/or a preset wind direction, and control the hair patch obtained after shifting to shift along the preset external force direction by the shifting value.
In an exemplary embodiment of the disclosure, the control module 1230 may be configured to obtain a vertex normal of the initial hair model, and replace the vertex normal with a normal map of the initial hair model to control the growth direction of the initial hair model to extend along a normal direction corresponding to the normal map.
In an exemplary embodiment of the present disclosure, the rendering module 1240 may be configured to set a base color of the initial hair model by the diffuse reflection map, calculate an adjustment color of the initial hair model according to the base color of the initial hair model, and render the initial hair model according to the adjustment color to generate the target hair model.
In an exemplary embodiment of the present disclosure, the rendering module 1240 may be further configured to calculate the adjusted color of the initial hair model by the following formula:
C_para = C_base * C_T
C = C_para * C_sun + C_env
wherein C is the adjusted color of the initial hair model, C_para is an intermediate parameter for calculating the adjusted color of the initial hair model, C_base is the base color of the initial hair model, C_T is the tint (dye) color of the initial hair model, C_sun is the sunlight color, and C_env is the ambient light color.
The specific details of each module in the above apparatus have been described in detail in the method section, and details of an undisclosed scheme may refer to the method section, and thus are not described again.
As will be appreciated by one skilled in the art, aspects of the present disclosure may be embodied as a system, method, or program product. Accordingly, various aspects of the present disclosure may be embodied in the form of: an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.), or an embodiment combining hardware and software aspects, all of which may generally be referred to herein as a "circuit," "module," or "system."
Exemplary embodiments of the present disclosure also provide a computer-readable storage medium having stored thereon a program product capable of implementing the above-described method of the present specification. In some possible embodiments, various aspects of the disclosure may also be implemented in the form of a program product comprising program code for causing a terminal device to perform the steps according to various exemplary embodiments of the disclosure described in the above-mentioned "exemplary methods" section of this specification, when the program product is run on the terminal device.
Referring to fig. 13, a program product 1300 for implementing the above method according to an exemplary embodiment of the present disclosure is described, which may employ a portable compact disc read only memory (CD-ROM) and include program code, and may be run on a terminal device, such as a personal computer. However, the program product of the present disclosure is not limited thereto, and in this document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
Program product 1300 may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A computer readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations for the present disclosure may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java or C++ and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (e.g., through the internet using an internet service provider).
The exemplary embodiment of the present disclosure also provides an electronic device capable of implementing the above method. An electronic device 1400 according to such exemplary embodiments of the present disclosure is described below with reference to fig. 14. The electronic device 1400 shown in fig. 14 is only an example and should not bring any limitations to the function and scope of use of the disclosed embodiments.
As shown in fig. 14, the electronic device 1400 may take the form of a general purpose computing device. The components of the electronic device 1400 may include, but are not limited to: the at least one processing unit 1410, the at least one memory unit 1420, the bus 1430 that connects the various system components (including the memory unit 1420 and the processing unit 1410), and the display unit 1440.
Where storage unit 1420 stores program code, the program code may be executed by processing unit 1410 such that processing unit 1410 performs steps according to various exemplary embodiments of the present disclosure described in the "exemplary methods" section above in this specification. For example, processing unit 1410 may perform the method steps shown in fig. 1 and 4, and so on.
The storage unit 1420 may include readable media in the form of volatile memory units, such as a random access memory unit (RAM)1421 and/or a cache memory unit 1422, and may further include a read only memory unit (ROM) 1423.
Storage unit 1420 may also include a program/utility 1424 having a set (at least one) of program modules 1425, such program modules 1425 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
Bus 1430 may be any of several types of bus structure, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, a processing unit, or a local bus using any of a variety of bus architectures.
The electronic device 1400 may also communicate with one or more external devices 1500 (e.g., keyboard, pointing device, bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device 1400, and/or with any devices (e.g., router, modem, etc.) that enable the electronic device 1400 to communicate with one or more other computing devices. Such communication can occur via an input/output (I/O) interface 1450. Also, the electronic device 1400 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the internet) via the network adapter 1460. As shown, the network adapter 1460 communicates with the other modules of the electronic device 1400 via the bus 1430. It should be appreciated that although not shown in the figures, other hardware and/or software modules may be used in conjunction with the electronic device 1400, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
It should be noted that although in the above detailed description several modules or units of the device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functions of two or more modules or units described above may be embodied in one module or unit, according to exemplary embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided into embodiments by a plurality of modules or units.
Furthermore, the above-described figures are merely schematic illustrations of processes included in methods according to exemplary embodiments of the present disclosure, and are not intended to be limiting. It will be readily understood that the processes shown in the above figures are not intended to indicate or limit the chronological order of the processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, e.g., in multiple modules.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the exemplary embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a usb disk, a removable hard disk, etc.) or on a network, and includes several instructions to make a computing device (which may be a personal computer, a server, a terminal device, or a network device, etc.) execute the method according to the exemplary embodiments of the present disclosure.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.