Audio editing method, apparatus, device, and readable storage medium
1. A method for audio editing, the method comprising:
displaying an audio editing page, wherein the audio editing page is a webpage displayed in a browser program and comprises a pitch reference area;
displaying, in the pitch reference region, note blocks of a process audio, the note blocks being used to compose the process audio, the process audio being audio produced in the pitch reference region;
receiving an audio adjustment operation on the audio editing page, wherein the audio adjustment operation is used for adjusting audio parameters of the process audio;
generating a result audio based on the audio adjustment operation on the process audio.
2. The method of claim 1, wherein the audio editing page comprises a sound source setting component;
the receiving an audio adjustment operation on the audio editing page comprises:
receiving a first audio adjustment operation on the sound source setting component, wherein the first audio adjustment operation is used for determining the tone color of the process audio.
3. The method of claim 2, wherein the receiving the first audio adjustment operation on the sound source setting component comprises:
receiving a selection operation on the sound source setting component;
displaying a sound source candidate item based on the selection operation, wherein the sound source candidate item is an item corresponding to a pre-stored tone color adjustment mode;
and receiving a selection operation on the target sound source candidate as the first audio adjusting operation.
4. The method of claim 2, wherein the receiving the first audio adjustment operation on the sound source setting component comprises:
receiving a selection operation on the sound source setting component;
displaying an audio recording page based on the selection operation, wherein the audio recording page is used for collecting sample audio through an audio input device;
and in response to completion of the sample audio recording, receiving a sound source generation operation as the first audio adjustment operation.
5. The method of claim 1, wherein the audio editing page includes a tempo setting component therein;
the receiving an audio adjustment operation on the audio editing page comprises:
receiving a second audio adjustment operation on the tempo setting component, wherein the second audio adjustment operation is used for determining the score playback speed of the process audio.
6. The method of claim 1, wherein a pitch adjustment component is included in the audio editing page;
the receiving an audio adjustment operation on the audio editing page comprises:
receiving a third pitch adjustment operation on the pitch adjustment component, the third pitch adjustment operation being used for determining the pitch of a note block of the process audio.
7. The method of claim 6, wherein the receiving a third pitch adjustment operation on the pitch adjustment component comprises:
receiving a selection operation on the pitch adjustment component;
displaying a pitch contour based on the selection operation, the pitch contour being an indication line generated from the note blocks for expressing the pitch variation;
receiving a drag adjustment operation for the pitch contour as the third pitch adjustment operation.
8. The method of claim 1, wherein the audio editing page includes a ventilation settings component therein;
the receiving an audio adjustment operation on the audio editing page comprises:
receiving a fourth audio adjustment operation on the ventilation setting component, the fourth audio adjustment operation being used to add a ventilation event to the sound source utterance in the process audio.
9. The method of any of claims 1 to 8, wherein prior to displaying the note blocks of process audio in the pitch reference region, further comprising:
receiving an audio import operation for importing, in the pitch reference region, note blocks of candidate audio, the candidate audio being audio that is locally stored or whose retrieval address is known;
or,
receiving a note block drawing operation in the pitch reference region, the note block drawing operation to create a note block corresponding to the pitch reference region.
10. The method according to any one of claims 1 to 8, further comprising:
receiving a first shortcut key operation;
and canceling the last audio adjustment operation before the current moment based on the first shortcut key operation.
11. The method of claim 9, further comprising:
receiving a second shortcut key operation;
and restoring the audio adjusting operation which is cancelled last time before the current time based on the second shortcut key operation.
12. A method for audio editing, the method comprising:
receiving audio data of a process audio, wherein the process audio is an audio to be edited in an audio editing page of the terminal, and the audio editing page is a webpage displayed in a browser program of the terminal;
receiving an audio adjusting signal, wherein the audio adjusting signal is a signal sent to a server when the terminal receives an audio adjusting operation on the audio editing page;
adjusting audio parameters of the process audio based on the audio adjustment signal;
receiving an audio generation signal, wherein the audio generation signal is used for indicating that result audio is generated on the basis of current process audio;
and feeding back the result audio to the terminal based on the audio generation signal.
13. An audio editing apparatus, characterized in that the apparatus comprises:
the display module is used for displaying an audio editing page, wherein the audio editing page is a webpage displayed in a browser program and comprises a pitch reference area;
the display module further to display, in the pitch reference region, note blocks of a process audio, the note blocks to constitute the process audio, the process audio being audio produced in the pitch reference region;
a receiving module, configured to receive an audio adjustment operation on the audio editing page, where the audio adjustment operation is used to adjust an audio parameter of the process audio;
a generation module to generate a result audio based on the audio adjustment operation on the process audio.
14. An audio editing apparatus, characterized in that the apparatus comprises:
a receiving module, configured to receive audio data of a process audio, wherein the process audio is audio to be edited in an audio editing page of a terminal, and the audio editing page is a webpage displayed in a browser program of the terminal;
the receiving module is further configured to receive an audio adjustment signal, where the audio adjustment signal is a signal sent to a server when the terminal receives an audio adjustment operation on the audio editing page;
an adjustment module to adjust an audio parameter of the process audio based on the audio adjustment signal;
the receiving module is further configured to receive an audio generation signal, where the audio generation signal is used to instruct that a result audio is generated on the basis of the current process audio;
and the sending module is used for feeding back the result audio to the terminal based on the audio generation signal.
15. A computer device comprising a processor and a memory, the memory having stored therein at least one instruction, at least one program, set of codes, or set of instructions, which is loaded and executed by the processor to implement an audio editing method as claimed in any one of claims 1 to 12.
16. A computer readable storage medium having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, which is loaded and executed by a processor to implement the audio editing method of any of claims 1 to 12.
Background
An audio editor is software that produces audio from imported MIDI files.
In the related art, after finding the MIDI file corresponding to the audio to be created, or making the MIDI file corresponding to the audio to be created, the creator imports the MIDI file into an audio editor and performs parameter configuration operations, thereby generating the audio to be created as the creation result.
However, because this approach requires the creator to find or make a MIDI file in order to create audio, obtaining the MIDI file is cumbersome and the efficiency of audio creation is low.
Disclosure of Invention
The embodiments of the present disclosure provide an audio editing method, apparatus, device, and readable storage medium, which can improve the efficiency and the diversity of audio creation. The technical scheme is as follows:
in one aspect, an audio editing method is provided, and the method includes:
displaying an audio editing page, wherein the audio editing page is a webpage displayed in a browser program and comprises a pitch reference area;
displaying, in the pitch reference region, note blocks of a process audio, the note blocks being used to compose the process audio, the process audio being audio produced in the pitch reference region;
receiving an audio adjustment operation on the audio editing page, wherein the audio adjustment operation is used for adjusting audio parameters of the process audio;
generating a result audio based on the audio adjustment operation on the process audio.
In an optional embodiment, the audio editing page includes a sound source setting component;
the receiving an audio adjustment operation on the audio editing page comprises:
receiving a first audio adjustment operation on the sound source setting component, wherein the first audio adjustment operation is used for determining the tone color of the process audio.
In an alternative embodiment, the receiving the first audio adjustment operation on the sound source setting component includes:
receiving a selection operation on the sound source setting component;
displaying a sound source candidate item based on the selection operation, wherein the sound source candidate item is an item corresponding to a pre-stored tone color adjustment mode;
and receiving a selection operation on the target sound source candidate as the first audio adjusting operation.
In an alternative embodiment, the receiving the first audio adjustment operation on the sound source setting component includes:
receiving a selection operation on the sound source setting component;
displaying an audio recording page based on the selection operation, wherein the audio recording page is used for collecting sample audio through an audio input device;
and in response to completion of the sample audio recording, receiving a sound source generation operation as the first audio adjustment operation.
In an optional embodiment, the audio editing page comprises a tempo setting component;
the receiving an audio adjustment operation on the audio editing page comprises:
receiving a second audio adjustment operation on the tempo setting component, wherein the second audio adjustment operation is used for determining the score playback speed of the process audio.
In an alternative embodiment, a pitch adjustment component is included in the audio editing page;
the receiving an audio adjustment operation on the audio editing page comprises:
receiving a third pitch adjustment operation on the pitch adjustment component, the third pitch adjustment operation being used for determining the pitch of a note block of the process audio.
In an optional embodiment, the receiving a third pitch adjustment operation on the pitch adjustment component comprises:
receiving a selection operation on the pitch adjustment component;
displaying a pitch contour based on the selection operation, the pitch contour being an indication line generated from the note blocks for expressing the pitch variation;
receiving a drag adjustment operation for the pitch contour as the third pitch adjustment operation.
In an optional embodiment, a ventilation setting component is included in the audio editing page;
the receiving an audio adjustment operation on the audio editing page comprises:
receiving a fourth audio adjustment operation on the ventilation setting component, the fourth audio adjustment operation being used to add a ventilation event to the sound source utterance in the process audio.
In an optional embodiment, before the displaying the note blocks of the process audio in the pitch reference region, the method further comprises:
receiving an audio import operation for importing, in the pitch reference region, note blocks of candidate audio, the candidate audio being audio that is locally stored or whose retrieval address is known;
or,
receiving a note block drawing operation in the pitch reference region, the note block drawing operation for creating a note block corresponding to the pitch reference region.
In an optional embodiment, the method further comprises:
receiving a first shortcut key operation;
and canceling the last audio adjustment operation before the current moment based on the first shortcut key operation.
In an optional embodiment, the method further comprises:
receiving a second shortcut key operation;
and restoring the audio adjusting operation which is cancelled last time before the current time based on the second shortcut key operation.
In another aspect, an audio editing method is provided, the method including:
receiving audio data of a process audio, wherein the process audio is an audio to be edited in an audio editing page of the terminal, and the audio editing page is a webpage displayed in a browser program of the terminal;
receiving an audio adjusting signal, wherein the audio adjusting signal is a signal sent to a server when the terminal receives an audio adjusting operation on the audio editing page;
adjusting audio parameters of the process audio based on the audio adjustment signal;
receiving an audio generation signal, wherein the audio generation signal is used for indicating that result audio is generated on the basis of current process audio;
and feeding back the result audio to the terminal based on the audio generation signal.
In another aspect, an audio editing apparatus is provided, the apparatus including:
the display module is used for displaying an audio editing page, wherein the audio editing page is a webpage displayed in a browser program and comprises a pitch reference area;
the display module further to display, in the pitch reference region, note blocks of a process audio, the note blocks to constitute the process audio, the process audio being audio produced in the pitch reference region;
a receiving module, configured to receive an audio adjustment operation on the audio editing page, where the audio adjustment operation is used to adjust an audio parameter of the process audio;
a generation module to generate a result audio based on the audio adjustment operation on the process audio.
In an optional embodiment, the audio editing page includes a sound source setting component;
the receiving module is further configured to receive a first audio adjustment operation on the sound source setting component, where the first audio adjustment operation is used to determine a tone color of the process audio.
In an optional embodiment, the receiving module is further configured to receive a selection operation on the sound source setting component;
the display module is further configured to display a sound source candidate item based on the selection operation, where the sound source candidate item is an item corresponding to a pre-stored tone color adjustment mode;
the receiving module is further configured to receive a selection operation on a target sound source candidate as the first audio adjustment operation.
In an optional embodiment, the receiving module is further configured to receive a selection operation on the sound source setting component;
the display module is further used for displaying an audio recording page based on the selection operation, and the audio recording page is used for collecting sample audio through an audio input device;
the receiving module is further configured to receive a sound source generation operation as the first audio adjustment operation in response to the sample audio being recorded.
In an optional embodiment, the audio editing page comprises a tempo setting component;
the receiving module is further configured to receive a second audio adjustment operation on the tempo setting component, where the second audio adjustment operation is used to determine the score playback speed of the process audio.
In an alternative embodiment, a pitch adjustment component is included in the audio editing page;
the receiving module is further configured to receive a third pitch adjustment operation on the pitch adjustment component, the third pitch adjustment operation being used to determine the pitch of a note block of the process audio.
In an alternative embodiment, the receiving module is further configured to receive a selection operation on the pitch adjustment component;
the display module is further configured to display a pitch contour based on the selection operation, where the pitch contour is an indication line generated from the note blocks for expressing the pitch variation;
the receiving module is further configured to receive a drag adjustment operation on the pitch contour as the third pitch adjustment operation.
In an optional embodiment, a ventilation setting component is included in the audio editing page;
the receiving module is further configured to receive a fourth audio adjustment operation on the ventilation setting component, where the fourth audio adjustment operation is used to add a ventilation event to the sound source utterance in the process audio.
In an optional embodiment, the receiving module is further configured to receive an audio import operation, where the audio import operation is used to import note blocks of candidate audio in the pitch reference region, the candidate audio being audio that is locally stored or whose retrieval address is known;
or,
the receiving module is further configured to receive a note block drawing operation in the pitch reference region, the note block drawing operation being used to create a note block corresponding to the pitch reference region.
In an optional embodiment, the receiving module is further configured to receive a first shortcut key operation; and canceling the last audio adjustment operation before the current moment based on the first shortcut key operation.
In an optional embodiment, the receiving module is further configured to receive a second shortcut key operation; and restoring the audio adjusting operation which is cancelled last time before the current time based on the second shortcut key operation.
In another aspect, an audio editing apparatus is provided, the apparatus including:
a receiving module, configured to receive audio data of a process audio, wherein the process audio is audio to be edited in an audio editing page of a terminal, and the audio editing page is a webpage displayed in a browser program of the terminal;
the receiving module is further configured to receive an audio adjustment signal, where the audio adjustment signal is a signal sent to a server when the terminal receives an audio adjustment operation on the audio editing page;
an adjustment module to adjust an audio parameter of the process audio based on the audio adjustment signal;
the receiving module is further configured to receive an audio generation signal, where the audio generation signal is used to instruct that a result audio is generated on the basis of the current process audio;
and the sending module is used for feeding back the result audio to the terminal based on the audio generation signal.
In another aspect, a computer device is provided, which includes a processor and a memory, where at least one instruction is stored in the memory, and the instruction is loaded and executed by the processor to implement the audio editing method as provided in the embodiments of the present disclosure.
In another aspect, a computer-readable storage medium is provided, in which at least one instruction is stored, the instruction being loaded and executed by a processor to implement the audio editing method as provided in the embodiments of the present disclosure above.
In another aspect, a computer program product or computer program is provided, the computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer readable storage medium, and the processor executes the computer instructions to cause the computer device to execute the audio editing method described in any of the above embodiments.
The technical scheme provided by the embodiment of the disclosure has the beneficial effects that:
in the audio editing process, a user can directly set note blocks on the webpage and adjust audio parameters, so the audio editing environment is flexible and the audio editing mode is convenient; the user does not need to additionally make a MIDI file and import it to achieve audio generation, which improves the efficiency and flexibility of audio file generation and improves the user interaction experience.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a schematic illustration of an implementation environment provided by an exemplary embodiment of the present disclosure;
FIG. 2 is a flowchart of an audio editing method provided by an exemplary embodiment of the present disclosure;
FIG. 3 is a schematic diagram of an audio editing process provided based on the embodiment shown in FIG. 2;
FIG. 4 is a page schematic diagram of an audio editing page provided by an exemplary embodiment of the present disclosure;
FIG. 5 is a schematic diagram of an editor component framework provided by an exemplary embodiment of the disclosure;
FIG. 6 is a flowchart of an audio editing method provided by another exemplary embodiment of the present disclosure;
FIG. 7 is a schematic diagram of a tempo setting component provided based on the embodiment shown in FIG. 6;
FIG. 8 is a schematic illustration of a pitch line provided based on the embodiment shown in FIG. 6;
fig. 9 is a schematic diagram of states in the audition playback process provided based on the embodiment shown in fig. 6;
FIG. 10 is a Vuex data architecture diagram provided by an exemplary embodiment of the present disclosure;
fig. 11 is a flowchart of an audio editing method provided by another exemplary embodiment of the present disclosure;
fig. 12 is a block diagram of an audio editing apparatus according to an exemplary embodiment of the present disclosure;
fig. 13 is a block diagram of an audio editing apparatus according to another exemplary embodiment of the present disclosure;
fig. 14 is a block diagram of a terminal according to an exemplary embodiment of the present disclosure.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
First, description is made on an implementation environment of an audio editing method according to an embodiment of the present disclosure.
The audio editing method related to the present disclosure may be implemented by a terminal, or may be implemented by the terminal and a server in cooperation. In this embodiment, the audio editing method implemented cooperatively by a terminal and a server is taken as an example; please refer to fig. 1, which schematically shows an implementation environment provided by an exemplary embodiment of the present disclosure.
As shown in fig. 1, the implementation environment includes a terminal 110 and a server 120, wherein the terminal 110 and the server 120 are connected through a communication network 130.
The terminal 110 is used to open and display an audio editing page in an application having a web browsing function. The terminal 110 is further configured to import an initial audio in the audio editing page, or create the initial audio by means of note block drawing, where when the terminal 110 imports the initial audio in the audio editing page, the terminal 110 uploads the initial audio to the server 120 so that the server 120 subsequently performs adjustment of audio parameters on the basis of the initial audio; when the terminal 110 creates the initial audio by means of note block drawing, the terminal 110 sends the drawn note blocks to the server 120, so that the server 120 performs subsequent audio parameter adjustment according to the generated initial audio.
The terminal 110 may also use the initial audio as process audio, and adjust audio parameters of the process audio. That is, the terminal 110 receives the audio adjusting operation, so that the terminal 110 sends an audio adjusting signal to the server 120, and the server 120 adjusts the audio parameter of the currently generated process audio according to the audio adjusting signal.
When the terminal 110 finishes adjusting the process audio, an audio generating signal is sent to the server 120, and the server 120 generates a result audio according to the audio generating signal and feeds the result audio back to the terminal 110 for playing.
It should be noted that, in the above embodiments, the audio editing method implemented by the terminal and the server together is taken as an example for description, in some embodiments, the process of adjusting the audio parameter by the server according to the audio adjustment signal may also be implemented at the terminal side, that is, the terminal may also complete the audio editing process offline, and the main body for implementing the audio editing method is not limited in the embodiments of the present application.
The terminal includes at least one of a smartphone, a tablet computer, a laptop computer, a desktop computer, a smart speaker, a smart wearable device, a smart face recognition device, and the like. The server may be a physical server or a cloud server providing cloud computing services, and may be implemented as one server, or as a server cluster or distributed system formed by a plurality of servers. When the terminal and the server cooperatively implement the scheme provided by the embodiments of the present application, the terminal and the server may be directly or indirectly connected in a wired or wireless communication manner, which is not limited in the embodiments of the present application.
In the embodiments of the present disclosure, an online Web audio editor is provided, in which an audio editing webpage can be opened through a browser, so as to implement online real-time audio editing on the audio editing webpage. Audio creation is achieved by creating notes, drawing the pitch of the notes, adjusting the phonemes of the notes, and adjusting parameters such as loudness, tension, sound source, and tempo.
Fig. 2 is a flowchart of an audio editing method according to an exemplary embodiment of the present disclosure, which is described by taking application of the method in a terminal as an example. As shown in fig. 2, the method includes:
step 201, displaying an audio editing page, where the audio editing page is a web page displayed in a browser program.
The audio editing page includes a pitch reference region. Wherein the pitch reference area is used to provide a pitch reference for the note settings of the author when authoring audio.
In some embodiments, the browser program may be implemented as an independent program installed in the terminal, or may also be implemented as a sub program in any parent program installed in the terminal, or another application program installed in the terminal and capable of opening a web page link.
In some embodiments, the audio editing page is a page developed with the Vue framework, which is a progressive JavaScript framework.
The audio editing page provides audio editing functionality, which in some embodiments includes at least one of: creating notes, adjusting note pitch, adjusting note phonemes, adjusting loudness, adjusting tension, adjusting sound source, adjusting tempo, and the like.
Creating notes refers to adding or deleting notes in the process audio, adjusting the positions of notes in the process audio, and the like. Adjusting the note pitch refers to adjusting the pitch expressed by a note block; for example, C4 represents a pitch, namely middle C, the central key on a piano keyboard. Adjusting the note phonemes refers to adjusting the sustain length of the phonemes corresponding to a note block. Adjusting the loudness refers to adjusting the volume of the process audio. Adjusting the tension refers to adjusting the degree of tension and relaxation of the sound expression in the process audio. Adjusting the sound source refers to adjusting the voicing timbre of the process audio; for example, the sound source includes a cartoon character sound source, a sound source recorded by a real person, a film or television character sound source, and the like, which is not limited by the embodiments of the present disclosure. Adjusting the tempo refers to adjusting the playback speed of the process audio.
In some embodiments, when audio adjustment is performed on the process audio, the adjustment is performed through the Web Audio API, which is a system for controlling audio on the Web and can perform various operations on the audio.
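As an illustrative, non-limiting sketch (the URL and function names are assumptions, not part of the disclosed method), playing the process audio and adjusting its loudness through the standard Web Audio API could look roughly as follows:

```javascript
// Minimal sketch: play a decoded audio buffer through a gain node (loudness control).
// The URL and function names are hypothetical; only standard Web Audio API calls are used.
const audioCtx = new AudioContext();

async function playProcessAudio(url, loudness) {
  const response = await fetch(url);
  const arrayBuffer = await response.arrayBuffer();
  const audioBuffer = await audioCtx.decodeAudioData(arrayBuffer);

  const source = audioCtx.createBufferSource();
  source.buffer = audioBuffer;

  const gainNode = audioCtx.createGain();
  gainNode.gain.value = loudness; // e.g. 0.0 (silent) to 1.0 (full loudness)

  source.connect(gainNode).connect(audioCtx.destination);
  source.start();
}
```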
In step 202, the note blocks of the process audio are displayed in the pitch reference area.
The note blocks are used to constitute the process audio, and the process audio is the audio produced in the pitch reference region.
In some embodiments, displaying the note blocks of the process audio in the pitch reference region includes at least one of:
First, an audio import operation is received, the audio import operation being used for importing, in the pitch reference region, note blocks of candidate audio, the candidate audio being audio that is locally stored or whose retrieval address is known.
That is, the candidate audio imported by the audio import operation is audio stored locally in the terminal; or, the candidate audio imported by the audio import operation is audio corresponding to a resource acquisition link, for example, an audio resource address entered in the audio import field.
The candidate audio is imported into the pitch reference area as the audio currently to be adjusted, and is used as the process audio for subsequent audio parameter adjustment.
Second, a note block drawing operation in the pitch reference region is received.
The note block drawing operation is for creating a note block corresponding to a pitch reference region, the note block being for constituting the process audio, the process audio being the audio produced in said pitch reference region.
In some embodiments, the manner of receiving the note block drawing operation includes at least one of the following:
First, the audio editing page includes a sound block drawing component; a click operation on the sound block drawing component determines that the operation to be executed currently is drawing a note block, and when a specifying operation is received in the note drawing area corresponding to the pitch reference area, a note block is set at the pitch and melody position specified by that operation;
Second, a function designation operation is received in the note drawing area corresponding to the pitch reference area and candidate functions including a note block drawing function are displayed; when a selection operation on the note block drawing function is received, a note block is set at the designated pitch and melody position;
Third, the audio editing page includes a sound block drawing component; when a click operation on the sound block drawing component is received, a candidate note block is generated at a default position, and a drag operation on the candidate note block drags it to a specified pitch and melody position to obtain the note block.
It should be noted that the above-mentioned manner of drawing the note blocks is only an illustrative example, and the embodiment of the disclosure does not limit this.
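As a minimal, hypothetical sketch of the drawing manners above (the data fields, grid helpers, and element names are assumptions rather than part of the disclosure), a note block can be represented by its pitch, its start position on the melody axis, and its duration, and created from a click in the note drawing area:

```javascript
// Sketch: a note block is described by its pitch (row in the pitch reference area),
// its start position along the melody axis, and its duration. All names are hypothetical.
function createNoteBlock(pitch, start, duration) {
  return { pitch, start, duration };          // e.g. { pitch: 'C4', start: 0, duration: 480 }
}

const defaultDuration = 480;                  // assumed default note length in ticks

// Setting a note block at the pitch / melody position specified by a click in the
// note drawing area corresponding to the pitch reference area.
noteDrawingArea.addEventListener('click', (event) => {
  const pitch = pitchFromY(event.offsetY);    // hypothetical: map the y-coordinate to a pitch row
  const start = snapToGrid(event.offsetX);    // hypothetical: snap the x-coordinate to the beat grid
  processAudio.noteBlocks.push(createNoteBlock(pitch, start, defaultDuration));
});
```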
In some embodiments, the pitch reference area is displayed vertically on the audio editing page, optionally displaying the pitch vertically from high to low.
The process audio is generated according to the note drawing operation or other editing operations, i.e., it is audio that has not yet been exported as a result; it can be constructed by concatenating the drawn notes according to the melody settings. The user can audition the currently generated process audio on the audio editing page, and can also generate the final result audio based on the process audio.
Step 203, receiving an audio adjustment operation on the audio editing page.
The audio adjustment operation is used to adjust audio parameters of the process audio, where the audio parameters include at least one of a sound source, a pitch, a phoneme, a loudness, a tension, and a tempo; these are merely illustrative examples, and the type of the audio parameters is not limited by the embodiments of the present disclosure.
In some embodiments, the audio editing page includes components corresponding to the audio parameters, and the audio parameters are adjusted by the components or in other manners.
Fig. 3 is a schematic diagram of an audio editing process provided by an exemplary embodiment of the present disclosure. As shown in fig. 3, after editing of the audio is started, the process includes the following steps. Step 301, adjust the tempo, the beat, and the sound source; that is, the playback speed and the beat of the process audio are set, and the voicing timbre corresponding to the process audio is set. Step 302, draw sound blocks, i.e., perform a sound block drawing operation to draw the note blocks that form the process audio. Step 303, draw the pitch line, i.e., the pitch line of the process audio is adjusted or drawn. Step 304, adjust the phonemes; adjusting a phoneme includes at least one of adjusting the pronunciation of the phoneme and adjusting the sustain length of the phoneme. Step 305, adjust the loudness; adjusting the loudness includes at least one of adjusting the loudness of the sound source vocalization, adjusting the loudness of the accompaniment, and adjusting the overall loudness. Step 306, adjust the tension, i.e., the degree of relaxation and tension of the sound source vocalization in the process audio is adjusted. Step 307, adjust the accompaniment track; the adjustment of the accompaniment track includes adjusting the degree of matching between the accompaniment track and the sound source vocalization, or adjusting the accompaniment length of the accompaniment track.
At step 204, a result audio is generated based on the audio adjustment operation on the process audio.
Referring to fig. 3, after the adjustment process is completed, the method further includes step 308: click the play control to audition, i.e., the currently generated process audio is auditioned; or step 309: generate audio, i.e., audio is generated on the basis of the current process audio to obtain the result audio.
It should be noted that the above process is described by taking the terminal side as an example. When the terminal and the server implement the audio editing method together, the server receives audio data of the process audio, where the process audio is the audio to be edited in the audio editing page of the terminal, and the audio editing page is a webpage displayed in a browser program of the terminal; the server receives an audio adjustment signal sent by the terminal, where the audio adjustment signal is a signal sent to the server when the terminal receives an audio adjustment operation on the audio editing page; the server adjusts an audio parameter of the process audio based on the audio adjustment signal; the server receives an audio generation signal, where the audio generation signal is used to indicate that the result audio is generated on the basis of the current process audio; and the server feeds back the result audio to the terminal based on the audio generation signal.
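A minimal sketch of the terminal-to-server interaction described above, assuming hypothetical endpoint paths and payload fields (none of which are specified by the disclosure), might look like this on the terminal side:

```javascript
// Sketch: the terminal sends an audio adjustment signal to the server when an
// audio adjustment operation is received on the audio editing page.
// The endpoint paths and payload fields are assumptions for illustration only.
async function sendAudioAdjustmentSignal(parameter, value) {
  await fetch('/api/audio/adjust', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ parameter, value }), // e.g. { parameter: 'tempo', value: 120 }
  });
}

// When editing is finished, the terminal sends an audio generation signal and
// receives the result audio fed back by the server.
async function requestResultAudio() {
  const response = await fetch('/api/audio/generate', { method: 'POST' });
  return await response.blob();                 // result audio returned to the terminal
}
```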
In summary, with the method provided by this embodiment, in the audio editing process a user can directly set note blocks on the webpage and adjust audio parameters, so the audio editing environment is flexible and the audio editing mode is convenient; the user does not need to additionally make a MIDI file and import it to generate audio, which improves the generation efficiency and flexibility of the audio files and improves the user interaction experience.
Schematically, fig. 4 is a page schematic diagram of an audio editing page provided by an exemplary embodiment of the present disclosure, as shown in fig. 4, the audio editing page 400 includes a pitch reference area 410, a note drawing area 420, a setting panel 430, a track display area 440, and a function area 450.
Wherein the pitch reference area 410 is disposed corresponding to the note drawing area 420. The pitch reference area 410 displays various pitch parameters, such as: "F4, E4, D4, C4, etc.". The pitch parameter in the pitch reference area 410 is displayed vertically. The note-drawing area 420 is used to display a note block corresponding to the pitch reference area 410.
The note-drawing area 420 also displays beat information; the 4/4 shown in fig. 4 indicates that the beat of the current process audio is 4/4. The notes composing the process audio are drawn in the note-drawing area 420 corresponding to the beat information and the pitch reference area 410.
The setting panel 430 is used for setting parameters such as the name, sound source, and tempo corresponding to the audio. The sound source may be selected from a sound source library or recorded by the user, and the tempo may be obtained by directly editing a number or by selecting from candidate tempos, which is not limited in this disclosure.
The track display area 440 is used for displaying a stem track 441 and an accompaniment track 442, where the stem track 441 is the track of the sound generated by the sound source, excluding the accompaniment, and the accompaniment track 442 is the track of the audio that accompanies the sound source vocalization. In the track display area 440, the loudness of the stem track 441 and the loudness of the accompaniment track 442 can be adjusted.
The functional area 450 includes a music score introduction control 451, a composition control 452, a mode switching control 453, a loudness control 454, a tension control 455, a play control 456, and an audio generation control 457.
The music score importing control 451 is used for importing the edited music score into the current process audio editing process. The composition control 452 is used to jump to the composition process. The mode switching control 453 is configured to switch between at least two editing modes, and illustratively, the mode switching control 453 is configured to switch between a note mode, i.e., a mode in which a drawn note block is displayed in the note drawing region 420, a pitch line mode, i.e., a mode in which a pitch line generated from the drawn note block is displayed in the note drawing region 420, and a phoneme mode, i.e., a mode in which phoneme information is displayed in the note drawing region 420. The loudness control 454 is used to adjust the playback loudness of the current process audio. The tension control 455 is used to adjust the source voicing tension in the current process audio. The play control 456 is used to control the play trial listening or pause the play of the currently edited process audio. Audio generation control 457 is used to generate result audio based on the currently edited process audio.
The interface shown in FIG. 4 above is for illustrative purposes only, and in some embodiments, more or fewer components may be included in the audio editing page. Fig. 5 is a schematic diagram of an editor component framework provided in an exemplary embodiment of the disclosure, and as shown in fig. 5, the audio editor 500 includes the following parts.
Track area 510: including audio titles, beats, tracks, stage (dry track), stage background, accompaniment tracks.
Head region 520: including importing music score popup, normal session.
Beat region 530: including the beat, beat popup, and beat display area.
Main region 540: including a piano zone (i.e., pitch reference zone), a stage background zone, a play line, a stage (including arrows and ventilation arrows), a menu list, a pitch line mode, a phoneme mode, a tension mode, a stage menu list, a lyric setting, and a song correction setting.
The piano keys in the piano area are implemented with Web technology: the piano sound is determined in the Web page by a C4 audio file, and the Web Audio API is then used to convert this audio, through a formula, into the audio sound of the pitch corresponding to each piano key (a sketch follows the region list below).
Control panel area 550.
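As a hedged sketch of the piano-key pitch conversion just described (the equal-temperament formula is standard, while the variable names are assumptions), a single C4 recording can be re-pitched for each key by changing the playback rate of a Web Audio buffer source:

```javascript
// Sketch: one C4 reference recording is re-pitched for every piano key with the Web Audio API.
// c4Buffer is assumed to hold the decoded C4 audio file; audioCtx is an AudioContext.
const audioCtx = new AudioContext();

function playPianoKey(c4Buffer, semitonesFromC4) {
  const source = audioCtx.createBufferSource();
  source.buffer = c4Buffer;
  // Equal-temperament formula: each semitone multiplies the frequency by 2^(1/12),
  // so resampling by this ratio shifts the C4 sample to the target pitch.
  source.playbackRate.value = Math.pow(2, semitonesFromC4 / 12);
  source.connect(audioCtx.destination);
  source.start();
}

playPianoKey(c4Buffer, 4); // E4 is four semitones above C4
```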
With reference to fig. 4 and fig. 5, fig. 6 is a flowchart of an audio editing method provided by another exemplary embodiment of the present disclosure, which is described by taking application of the method in a terminal as an example. As shown in fig. 6, the method includes:
step 601, displaying an audio editing page, wherein the audio editing page is a webpage displayed in a browser program.
The audio editing page includes a pitch reference region. Wherein the pitch reference area is used to provide a pitch reference for the note settings of the author when authoring audio.
In some embodiments, the browser program may be implemented as an independent program installed in the terminal, or may also be implemented as a sub program in any parent program installed in the terminal, or another application program installed in the terminal and capable of opening a web page link.
In step 602, a note block of the process audio is displayed in a pitch reference region.
The note blocks are used to constitute process audio, which is audio produced in a pitch reference region.
In some embodiments, displaying the note blocks of the process audio in the pitch reference region includes at least one of:
First, an audio import operation is received, the audio import operation being used for importing, in the pitch reference region, note blocks of candidate audio, the candidate audio being audio that is locally stored or whose retrieval address is known.
Second, a note block drawing operation in the pitch reference region is received.
Step 603, receive a first audio adjustment operation on the sound source setting component.
The audio editing page includes a sound source setting component, and the first audio adjustment operation is used for determining the tone color of the process audio.
In some embodiments, the implementation of the first audio adjustment operation includes at least one of:
First, a selection operation on the sound source setting component is received, and a sound source candidate item is displayed based on the selection operation, where the sound source candidate item is an option corresponding to a pre-stored tone color adjustment mode; a selection operation on a target sound source candidate item is received as the first audio adjustment operation.
Illustratively, when a selection operation on the sound source setting component is received, a candidate sound source list is displayed, where the candidate sound source list includes sound source candidates such as: A. doll voice; B. star x; C. cartoon character y. When a selection operation on option A is received, the doll voice is used as the voicing timbre of the sound source in the current process audio.
Second, a selection operation on the sound source setting component is received, and an audio recording page is displayed based on the selection operation, where the audio recording page is used for collecting sample audio through an audio input device; in response to completion of the sample audio recording, a sound source generation operation is received as the first audio adjustment operation.
Step 604, a second audio adjustment operation on the tempo setting component is received.
The audio editing page includes a tempo setting component, and the second audio adjustment operation is used for determining the score playback speed of the process audio.
In some embodiments, the tempo setting component expands a pop-up window in which the tempo value is set; alternatively, the tempo setting component includes an increase control and a decrease control, and the tempo is increased through a trigger operation on the increase control and decreased through a trigger operation on the decrease control.
Referring to fig. 7, which is a schematic diagram of a tempo setting component according to an exemplary embodiment of the present disclosure, the tempo setting component 700 includes an increase control 710, a decrease control 720, and a tempo value display area 730. The tempo value of the current process audio is displayed in the tempo value display area 730; when a selection operation on the increase control 710 is received, the tempo value in the display area 730 is increased by a preset step size; conversely, when a selection operation on the decrease control 720 is received, the tempo value in the display area 730 is decreased by the preset step size.
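A minimal sketch of such a tempo setting component in the Vue style used by the page (the step size, limits, and markup are assumptions, not values from the disclosure) could be:

```javascript
// Sketch of a tempo setting component with an increase control, a decrease control,
// and a tempo value display area. The step size and lower limit are assumed values.
export default {
  name: 'TempoSetting',
  data() {
    return { tempo: 120, step: 5 };            // tempo value shown in the display area
  },
  methods: {
    increase() { this.tempo += this.step; },   // triggered by the increase control
    decrease() { this.tempo = Math.max(1, this.tempo - this.step); }, // decrease control
  },
  template: `
    <div class="tempo-setting">
      <button @click="decrease">-</button>
      <span class="tempo-value">{{ tempo }}</span>
      <button @click="increase">+</button>
    </div>
  `,
};
```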
Step 605, receive a third pitch adjustment operation on the pitch adjustment component.
The audio editing page includes a pitch adjustment component, and the third pitch adjustment operation is used to determine the pitch of the note blocks of the process audio.
In some embodiments, a selection operation on the pitch adjustment component is received, a pitch contour is displayed based on the selection operation, the pitch contour being an indication line generated from the note blocks for expressing the pitch variation, and a drag adjustment operation on the pitch contour is received as the third pitch adjustment operation.
Optionally, the pitch adjustment component is implemented as the pitch line mode component, that is, the user selects the mode switching control in the current audio editing page and selects the pitch line mode therein, so that the pitch line corresponding to the current process audio is displayed in the audio editing page.
In some embodiments, the pitch contour corresponding to the process audio is predicted by artificial intelligence. Schematically, the information of the current note block in the process audio is input into a neural network model obtained through pre-training, and then a pitch contour corresponding to the process audio is output.
The predicted pitch line is displayed in dotted-line form in the audio editing page, the user performs pitch adjustment on the basis of the predicted pitch line, and the pitch line adjusted by the user is displayed in solid-line form.
Schematically, fig. 8 is a pitch line schematic diagram provided by an exemplary embodiment of the present disclosure, as shown in fig. 8, a note block 810 and a reference pitch line 820 automatically generated by the corresponding note block are displayed in an audio editing page 800, and a user may drag the reference pitch line 820, so as to obtain an adjusted pitch line 830 as a pitch line of the process audio.
Alternatively, the user can directly draw the pitch line in the audio editing page to obtain the pitch line corresponding to the process audio. The pitch line is mainly implemented with Scalable Vector Graphics (SVG): all points are connected through the path attribute in the SVG, the point data are then changed by drawing with the mouse, and the new data points are connected to finally form a new pitch line.
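A hedged sketch of this SVG approach (element IDs, helper functions, and the drag bookkeeping are assumptions added for illustration) is shown below; the path's d attribute connects the sampled points, and mouse movement rewrites the point data and rebuilds the path:

```javascript
// Sketch: the pitch line is an SVG <path> whose "d" attribute connects all sampled points.
// Dragging with the mouse changes the point data and the path is rebuilt from the new points.
const pitchPath = document.querySelector('#pitch-line'); // <path> element, id is assumed
let points = [{ x: 0, y: 120 }, { x: 40, y: 110 }, { x: 80, y: 125 }];
let isDragging = false;                                  // toggled on mousedown / mouseup

function renderPitchLine() {
  const d = points.map((p, i) => `${i === 0 ? 'M' : 'L'} ${p.x} ${p.y}`).join(' ');
  pitchPath.setAttribute('d', d);                        // connect the (new) data points
}

svgElement.addEventListener('mousemove', (event) => {
  if (!isDragging) return;
  const index = nearestPointIndex(event.offsetX);        // hypothetical helper: closest point to cursor
  points[index].y = event.offsetY;                       // the y-coordinate encodes the pitch
  renderPitchLine();
});
```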
Step 606, receive a fourth audio adjustment operation on the ventilation settings component.
The audio editing page includes a ventilation setting component, and the fourth audio adjustment operation is used for adding a ventilation event to the sound source utterance in the process audio.
In some embodiments, the fourth audio adjustment operation is used to add a ventilation event between two adjacent phonemes, simulating the taking of a breath in the vocalization process.
Step 607, generating a result audio based on the audio adjustment operation on the process audio.
Namely, the audio is generated on the basis of the current process audio to obtain the result audio.
In some embodiments, the process audio is first auditioned, and the result audio is generated when the audition result meets the user's requirements.
The audio editing webpage comprises a play control, and process audio is auditioned and played through selection operation of the play control. Referring to fig. 9, the audition playing process mainly includes the following states: initial state 910, play state 920, pause state 930, end state 940.
The initial state 910 can be switched to the playing state 920, the playing state 920 can be switched to the pause state 930, and the playing state 920 can be switched to the ending state 940.
First, in the initial state 910, when the user clicks the play control, it is first determined whether the audio file needs to be re-synthesized; if re-synthesis is needed, the process audio is synthesized and played. If re-synthesis is not needed, it is determined whether a playable link exists; if playable audio exists, it is played directly without synthesis, and if no playable audio exists, the process audio is synthesized and played. The state then switches to the playing state 920.
In the play state 920, when the play control is clicked again, the state is switched to the pause state 930.
In the pause state 930, when the play control is clicked, it is determined whether re-synthesis is required; if so, the audio is synthesized and played, and if not, playback is performed directly or continues. The state switches to the playing state 920.
In the ending state 940, when the play control is clicked, it is determined whether re-synthesis is required; if so, the audio is synthesized and played, and if not, it is played directly or playback continues. The state switches to the playing state 920.
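The state transitions above can be summarized in a small state machine; the following sketch is illustrative only, and the guard helpers (needsResynthesis, hasPlayableAudio) are hypothetical stand-ins for the checks described in the text:

```javascript
// Sketch of the audition playback states: initial, playing, paused, ended.
const PlayState = { INITIAL: 'initial', PLAYING: 'playing', PAUSED: 'paused', ENDED: 'ended' };
let state = PlayState.INITIAL;

function onPlayControlClicked() {
  if (state === PlayState.PLAYING) {        // playing -> paused
    pausePlayback();
    state = PlayState.PAUSED;
    return;
  }
  // initial / paused / ended -> playing
  if (needsResynthesis()) {
    synthesizeProcessAudio().then(startPlayback);  // re-synthesize, then play
  } else if (hasPlayableAudio()) {
    startPlayback();                               // play directly without re-synthesis
  } else {
    synthesizeProcessAudio().then(startPlayback);
  }
  state = PlayState.PLAYING;
}
```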
In summary, with the method provided by this embodiment, in the audio editing process a user can directly set note blocks on the webpage and adjust audio parameters, so the audio editing environment is flexible and the audio editing mode is convenient; the user does not need to additionally make a MIDI file and import it to generate audio, which improves the generation efficiency and flexibility of the audio files and improves the user interaction experience.
In addition to creating note blocks, the method provided by this embodiment can adjust the pitch line, loudness, tension, tempo, beat, sound source, and other aspects, thereby improving the efficiency and accuracy of audio editing.
In some embodiments, the audio editor provided by the embodiments of the present disclosure relies on the Vue state management pattern (Vuex) for state management and data flow within the audio editor, where the data architecture of Vuex is shown in fig. 10.
Referring to fig. 10, the editor basic elements 1010 include: the minimum width of a thirty-second note, for example 20; the minimum height of a thirty-second note, for example 25; the beat, for example 4/4; the line position; the number of measures in the editor; the tempo; the sound source; and stage-related content.
The data architecture of Vuex also includes stage sound blocks 1020, loudness raw data 1030, tension raw data 1040, pitch lines 1050, operation flags 1060, mode switches 1070, and the like.
The pitch line 1050 relates to a pitch line synthesized by Artificial Intelligence (AI), a pitch line edited by the user, local editing content of the pitch line, and the like; the operation flags 1060 relate to changes of a stage sound block, changes of the pitch line, changes of loudness, changes of tension, changes of phonemes, and the like.
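As an illustrative sketch only (the field names mirror the Vuex data architecture in fig. 10 but are assumptions, and the Vuex 4 createStore API is used), the editor state could be organized as a single store:

```javascript
// Sketch of a Vuex store holding the editor state described above. Field names are illustrative.
import { createStore } from 'vuex';

export const store = createStore({
  state() {
    return {
      noteWidth: 20,          // minimum width of a thirty-second note
      noteHeight: 25,         // minimum height of a thirty-second note
      beat: '4/4',
      tempo: 120,
      soundSource: null,
      noteBlocks: [],         // stage sound blocks
      loudnessData: [],       // loudness raw data
      tensionData: [],        // tension raw data
      pitchLine: [],          // AI-synthesized or user-edited pitch line points
      operationFlags: {},     // change flags for blocks, pitch line, loudness, tension, phonemes
      mode: 'note',           // current mode: note / pitch-line / phoneme
    };
  },
  mutations: {
    setTempo(state, tempo) { state.tempo = tempo; },
    setMode(state, mode) { state.mode = mode; },
    addNoteBlock(state, block) { state.noteBlocks.push(block); },
  },
});
```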
In some embodiments, an operation may also be undone by a shortcut key. Fig. 11 is a flowchart of an audio editing method according to another exemplary embodiment of the present disclosure, which is described by taking application of the method in a terminal as an example. As shown in fig. 11, the method includes:
step 1101, displaying an audio editing page, wherein the audio editing page is a webpage displayed in the browser program.
The audio editing page includes a pitch reference region. Wherein the pitch reference area is used to provide a pitch reference for the note settings of the author when authoring audio.
In some embodiments, the browser program may be implemented as an independent program installed in the terminal, or may also be implemented as a sub program in any parent program installed in the terminal, or another application program installed in the terminal and capable of opening a web page link.
In step 1102, a note block of the process audio is displayed in the pitch reference region.
The note blocks are used to constitute process audio, which is audio produced in a pitch reference region.
In some embodiments, displaying the note blocks of the process audio in the pitch reference region includes at least one of:
First, an audio import operation is received, the audio import operation being used for importing, in the pitch reference region, note blocks of candidate audio, the candidate audio being audio that is locally stored or whose retrieval address is known.
Second, a note block drawing operation in the pitch reference region is received.
Step 1103, receiving an audio adjustment operation on the audio editing page.
The audio adjustment operation is used to adjust audio parameters of the process audio, where the audio parameters include at least one of a sound source, a pitch, a phoneme, a loudness, a tension, and a tempo; these are merely illustrative examples, and the type of the audio parameters is not limited by the embodiments of the present disclosure.
In some embodiments, the audio editing page includes components corresponding to the audio parameters, and the audio parameters are adjusted by the components or in other manners.
And step 1104, receiving a first shortcut key operation.
The first shortcut key operation is a preset shortcut key operation corresponding to the undo function. Illustratively, the first shortcut key operation is pressing Ctrl+Z on the keyboard.
In step 1105, the last audio adjustment operation before the current time is cancelled based on the first shortcut key operation.
The first shortcut key operation is a preset and stored shortcut key operation corresponding to the undo function.
In some embodiments, the audio editor also maintains an undo stack; each operation is pushed onto the undo stack, and when an undo is needed, the most recent operation is popped from the undo stack and reverted to implement the undo function.
Step 1106, receiving a second shortcut key operation.
The second shortcut key operation is a preset shortcut key operation corresponding to the redo function. Illustratively, the second shortcut key operation is an operation of entering Ctrl + Y through the keyboard.
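As a hedged sketch of how a webpage might detect the two shortcut keys described above, the following keydown listener maps Ctrl + Z and Ctrl + Y to undo and redo handlers; the handler names undo() and redo() are placeholders assumed to exist in the editor.

```typescript
// Illustrative sketch: listening for the undo / redo shortcut keys in the browser.
declare function undo(): void; // placeholder handler for the undo function
declare function redo(): void; // placeholder handler for the redo function

window.addEventListener('keydown', (event: KeyboardEvent) => {
  if (!event.ctrlKey) return;
  if (event.key.toLowerCase() === 'z') {
    event.preventDefault(); // keep the browser's own undo from firing
    undo();                 // first shortcut key operation: Ctrl + Z
  } else if (event.key.toLowerCase() === 'y') {
    event.preventDefault();
    redo();                 // second shortcut key operation: Ctrl + Y
  }
});
```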
Step 1107, restoring, based on the second shortcut key operation, the audio adjustment operation that was last undone before the current time.
The second shortcut key operation is a preset and stored shortcut key operation corresponding to the redo function.
In some embodiments, the audio editor maintains an undo stack and a redo stack. When an undo is needed, the most recent operation is popped from the undo stack, reversed, and pushed onto the redo stack; when a redo is needed, the most recent operation is popped from the redo stack, re-executed, and pushed back onto the undo stack, so that the next undo can again proceed from the undo stack.
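The undo stack and redo stack described above can be sketched as follows; the EditOperation shape with apply/revert callbacks is an assumed representation, since the embodiment does not specify how an operation is stored.

```typescript
// Minimal sketch of the undo / redo stacks described above (assumed operation shape).
interface EditOperation {
  apply: () => void;  // performs the audio adjustment operation
  revert: () => void; // reverses the audio adjustment operation
}

class EditHistory {
  private undoStack: EditOperation[] = [];
  private redoStack: EditOperation[] = [];

  // Every audio adjustment operation is pushed onto the undo stack;
  // performing a new operation clears anything that could still be redone.
  perform(op: EditOperation): void {
    op.apply();
    this.undoStack.push(op);
    this.redoStack = [];
  }

  // Step 1105: pop the most recent operation, reverse it, and park it on the redo stack.
  undo(): void {
    const op = this.undoStack.pop();
    if (!op) return;
    op.revert();
    this.redoStack.push(op);
  }

  // Step 1107: pop the most recently undone operation, re-apply it,
  // and push it back onto the undo stack so it can be undone again.
  redo(): void {
    const op = this.redoStack.pop();
    if (!op) return;
    op.apply();
    this.undoStack.push(op);
  }
}
```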
Step 1108, generating a result audio based on the audio adjustment operation on the process audio.
Namely, the audio is generated on the basis of the current process audio to obtain the result audio.
In some embodiments, the process audio is first auditioned, and when the audition result meets the user's requirement, the result audio is generated.
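As a hedged illustration of this audition-then-generate flow, the following sketch plays the current process audio in the browser and only requests result-audio generation after the user confirms; the endpoint path '/audio/generate' and the helper onUserConfirmed are hypothetical.

```typescript
// Hypothetical sketch: audition the process audio, then request the result audio.
declare function onUserConfirmed(): Promise<boolean>; // resolves true if the audition satisfies the user

async function auditionThenGenerate(processAudio: Blob): Promise<void> {
  const previewUrl = URL.createObjectURL(processAudio); // temporary URL for playback
  const player = new Audio(previewUrl);
  await player.play();                                  // start auditioning the current process audio

  if (await onUserConfirmed()) {
    // only when the audition result meets the user's requirement is the result audio generated
    await fetch('/audio/generate', { method: 'POST' });
  }
  URL.revokeObjectURL(previewUrl);                      // release the temporary URL
}
```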
In summary, in the method provided by this embodiment, in the audio editing process, the user can directly set note blocks on the webpage and adjust audio parameters; the audio editing environment is flexible and the audio editing mode is convenient, and the user does not need to additionally make MIDI files and import them to generate audio, so that the generation efficiency and flexibility of audio files are improved, and the user interaction experience is improved.
According to the method provided by this embodiment, an undo shortcut key is provided to offer the undo function, so that an audio adjustment operation can be undone through the undo shortcut key when the user performs a misoperation, and an undone audio adjustment operation can be restored through the redo shortcut key, thereby improving the efficiency of audio editing and the efficiency of human-computer interaction.
Fig. 12 is a block diagram of an audio editing apparatus according to an exemplary embodiment of the present disclosure; as shown in fig. 12, the apparatus includes:
a display module 1210, configured to display an audio editing page, where the audio editing page is a web page displayed in a browser program, and the audio editing page includes a pitch reference area;
the display module 1210 is further configured to display, in the pitch reference region, note blocks of the process audio, the note blocks being used to constitute the process audio, the process audio being audio produced in the pitch reference region;
a receiving module 1220, configured to receive an audio adjustment operation on the audio editing page, where the audio adjustment operation is used to adjust an audio parameter of the process audio;
a generating module 1230 configured to generate a result audio based on the audio adjustment operation on the process audio.
In an optional embodiment, the audio editing page includes a sound source setting component;
the receiving module 1220 is further configured to receive a first audio adjusting operation on the sound source setting component, where the first audio adjusting operation is used to determine a tone color of the process audio.
In an alternative embodiment, the receiving module 1220 is further configured to receive a selection operation on the sound source setting component;
the display module 1210 is further configured to display a sound source candidate item based on the selection operation, where the sound source candidate item is an option corresponding to a pre-stored tone color adjustment manner;
the receiving module 1220 is further configured to receive a selection operation on a target audio source candidate as the first audio adjusting operation.
In an optional embodiment, the receiving module 1220 is further configured to receive a selection operation on the sound source setting component;
the display module 1210 is further configured to display an audio recording page based on the selection operation, where the audio recording page is used to collect sample audio through an audio input device;
the receiving module 1220 is further configured to, in response to completion of the sample audio recording, receive a sound source generating operation as the first audio adjusting operation.
In an optional embodiment, the audio editing page comprises a music speed setting component;
the receiving module 1220 is further configured to receive a second audio adjusting operation on the music speed setting component, where the second audio adjusting operation is used to determine a music score playing speed of the process audio.
In an alternative embodiment, a pitch adjustment component is included in the audio editing page;
the receiving module 1220 is further configured to receive a third pitch adjustment operation on the pitch adjustment component, where the third pitch adjustment operation is used to determine the pitch of a note block of the process audio.
In an alternative embodiment, the receiving module 1220 is further configured to receive a selection operation on the pitch adjustment component;
the display module 1210 is further configured to display a pitch contour based on the selection operation, where the pitch contour is an indication line generated according to the note block and used for expressing a pitch situation;
the receiving module 1220 is further configured to receive a drag adjustment operation on the pitch contour as the third pitch adjustment operation.
In an optional embodiment, a ventilation setting component is included in the audio editing page;
the receiving module 1220 is further configured to receive a fourth audio adjusting operation on the ventilation setting component, where the fourth audio adjusting operation is used to add a ventilation event to the sound source utterance content in the process audio.
In an optional embodiment, the receiving module 1220 is further configured to receive an audio import operation, where the audio import operation is used to import note blocks of a candidate audio in the pitch reference region, the candidate audio being audio that is already stored or whose retrieval address is known;
alternatively,
the receiving module 1220 is further configured to receive a note block drawing operation in the pitch reference region, the note block drawing operation being used to create a note block corresponding to the pitch reference region.
In an optional embodiment, the receiving module 1220 is further configured to receive a first shortcut key operation; and undo, based on the first shortcut key operation, the last audio adjustment operation before the current time.
In an optional embodiment, the receiving module 1220 is further configured to receive a second shortcut key operation; and restore, based on the second shortcut key operation, the audio adjustment operation that was last undone before the current time.
Fig. 13 is a block diagram of an audio editing apparatus according to another exemplary embodiment of the present disclosure, as shown in fig. 13, the apparatus includes:
a receiving module 1310, configured to receive audio data of a process audio, where the process audio is an audio to be edited in an audio editing page of the terminal, and the audio editing page is a web page displayed in a browser program of the terminal;
the receiving module 1310 is further configured to receive an audio adjusting signal, where the audio adjusting signal is a signal sent to a server when the terminal receives an audio adjusting operation on the audio editing page;
an adjustment module 1320, configured to adjust an audio parameter of the process audio based on the audio adjustment signal;
the receiving module 1310 is further configured to receive an audio generating signal, where the audio generating signal is used to instruct to generate result audio based on the current process audio;
a sending module 1330, configured to feed back the resulting audio to the terminal based on the audio generating signal.
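From the terminal side, the exchange with such a server-side apparatus might look like the following sketch; the endpoint paths and payload fields are purely hypothetical assumptions and are not defined by the embodiment.

```typescript
// Hypothetical terminal-side sketch of the exchange with the server-side apparatus above.
interface AudioAdjustSignal {
  parameter: 'soundSource' | 'pitch' | 'phoneme' | 'loudness' | 'tension' | 'musicSpeed';
  value: unknown; // new value for the adjusted audio parameter
}

// Send an audio adjustment signal when an audio adjustment operation is received on the page.
async function sendAdjustSignal(signal: AudioAdjustSignal): Promise<void> {
  await fetch('/audio/adjust', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(signal),
  });
}

// Send an audio generation signal and receive the result audio fed back by the server.
async function requestResultAudio(): Promise<Blob> {
  const response = await fetch('/audio/generate', { method: 'POST' });
  return response.blob(); // result audio returned to the terminal
}
```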
In summary, in the apparatus provided by this embodiment, in the audio editing process, the user can directly draw note blocks on the webpage and adjust audio parameters; the audio editing environment is flexible and the audio editing mode is convenient, and the user does not need to additionally make MIDI files and import them to generate audio, so that the generation efficiency and flexibility of audio files are improved, and the user interaction experience is improved.
It should be noted that: the audio editing apparatus provided in the foregoing embodiment is only illustrated by dividing the functional modules, and in practical applications, the functions may be distributed by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules, so as to complete all or part of the functions described above. In addition, the audio editing apparatus and the audio editing method provided by the above embodiments belong to the same concept, and specific implementation processes thereof are described in the method embodiments in detail and are not described herein again.
Fig. 14 is a block diagram illustrating a computer device 1400 according to an exemplary embodiment. For example, the computer device 1400 may be the terminal introduced above. For example, the terminal may be an electronic device such as a mobile phone, a tablet computer, an e-book reader, a multimedia playing device, a Personal Computer (PC), or a wearable device.
Referring to fig. 14, computer device 1400 may include one or more of the following components: a processing component 1402, a memory 1404, a power component 1406, a multimedia component 1408, an audio component 1410, an Input/Output (I/O) interface 1412, a sensor component 1414, and a communication component 1416.
The processing component 1402 generally controls the overall operation of the computer device 1400, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. Processing component 1402 may include one or more processors 1420 to execute instructions to perform all or a portion of the steps of the methods described above. Further, processing component 1402 can include one or more modules that facilitate interaction between processing component 1402 and other components. For example, the processing component 1402 can include a multimedia module to facilitate interaction between the multimedia component 1408 and the processing component 1402.
The memory 1404 is configured to store various types of data to support the operation at the computer device 1400. Examples of such data include instructions for any application or method operating on computer device 1400, contact data, phonebook data, messages, pictures, videos, and so forth. The Memory 1404 may be implemented by any type of volatile or non-volatile Memory device or combination thereof, such as Static Random-Access Memory (SRAM), Electrically Erasable Programmable Read Only Memory (EEPROM), Erasable Programmable Read Only Memory (EPROM), Programmable Read Only Memory (PROM), Read Only Memory (ROM), magnetic Memory, flash Memory, magnetic disk or optical disk.
The power supply component 1406 provides power to the various components of the computer device 1400. The power components 1406 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the computer device 1400.
The multimedia component 1408 comprises a screen providing an output interface between the computer device 1400 and a user. In some embodiments, the screen may include an Organic Light-Emitting Diode (OLED) display screen and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 1408 includes a front-facing camera and/or a rear-facing camera. The front camera and/or the rear camera may receive external multimedia data when the computer device 1400 is in an operating mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 1410 is configured to output and/or input audio signals. For example, audio component 1410 includes a Microphone (MIC) configured to receive external audio signals when computer device 1400 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 1404 or transmitted via the communication component 1416. In some embodiments, audio component 1410 further includes a speaker for outputting audio signals.
I/O interface 1412 provides an interface between processing component 1402 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor component 1414 includes one or more sensors for providing various aspects of state assessment for the computer device 1400. For example, the sensor component 1414 can detect an open/closed state of the computer device 1400 and the relative positioning of components, such as a display and a keypad of the computer device 1400; the sensor component 1414 can also detect a change in the position of the computer device 1400 or a component of the computer device 1400, the presence or absence of user contact with the computer device 1400, the orientation or acceleration/deceleration of the computer device 1400, and a change in the temperature of the computer device 1400. The sensor component 1414 may include a proximity sensor configured to detect the presence of a nearby object in the absence of any physical contact. The sensor component 1414 may also include a photosensor, such as a Complementary Metal Oxide Semiconductor (CMOS) or Charge-Coupled Device (CCD) image sensor, for use in imaging applications. In some embodiments, the sensor component 1414 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 1416 is configured to facilitate wired or wireless communication between the computer device 1400 and other devices. Computer device 1400 may access a wireless network based on a communication standard, such as Wi-Fi, 2G, or 3G, or a combination thereof. In an exemplary embodiment, the communication component 1416 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the Communication component 1416 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wide Band (UWB) technology, BlueTooth (BlueTooth, BT) technology, and other technologies.
In an exemplary embodiment, the computer Device 1400 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components for performing the above-described audio editing methods.
In an exemplary embodiment, a non-transitory computer readable storage medium is also provided, on which a computer program is stored, which, when executed by a processor of the computer device 1400, enables the computer device 1400 to implement the above-described audio editing method. For example, the non-transitory computer-readable storage medium may be a ROM, a Random-Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
The embodiment of the present disclosure further provides a computer device, which includes a memory and a processor, where the memory stores at least one instruction, at least one program, a code set, or an instruction set, and the at least one instruction, the at least one program, the code set, or the instruction set is loaded by the processor and implements the audio editing method.
The disclosed embodiments also provide a computer-readable storage medium, in which at least one instruction, at least one program, a code set, or an instruction set is stored, and the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by the processor to implement the above audio editing method.
The present disclosure also provides a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer readable storage medium, and the processor executes the computer instructions to cause the computer device to execute the audio editing method described in any of the above embodiments.
It should be understood that reference to "a plurality" herein means two or more. "and/or" describes the association relationship of the associated objects, meaning that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.