Network and speed enhancement for distributing unified images over computer networks

1. A computer-implemented method for determining which image of a plurality of images to present in a search results page for a product, comprising:

receiving the plurality of images of a plurality of items associated with the product;

performing image ranking of the plurality of images to identify a first image of the plurality of images of the product based at least in part on a user interaction metric for each image of the plurality of images;

receiving a search query from a user device that maps to the product; and

sending the search results page including at least one item of the plurality of items and the first image to the user device based at least in part on the user interaction metric of the first image.

2. The method of claim 1, wherein sending the search results page comprises:

sending the search results page including the first image to the user device based at least in part on a network bandwidth measurement satisfying a bandwidth threshold.

3. The method of claim 1, further comprising:

monitoring user interaction with the plurality of images via one or more search results pages to generate the user interaction metric for each image of the plurality of images, wherein the image ranking of the plurality of images is based at least in part on applying a machine learning model to the user interaction metric.

4. The method of claim 3, wherein monitoring user interaction with the plurality of images comprises:

monitoring, via the one or more search results pages, to generate the user interaction metric: a user viewing time for one or more of the plurality of images, a user image zoom indicator for one or more of the plurality of images, an item sale price for each item in a sold subset of the plurality of items, or any combination thereof.

5. The method of claim 3, further comprising:

monitoring, via one or more search results pages, user interaction with the first image to generate an updated user interaction metric for the first image;

ranking the plurality of images to identify a second image of the plurality of images of the product based at least in part on the updated user interaction metric;

receiving a second search query from the user device or a second user device that maps to the product; and

sending a second search results page including the second image to the user device or the second user device, the second image being the same as the first image or different from the first image.

6. The method of claim 1, further comprising:

receiving a listing of at least one item of the plurality of items and one or more of the plurality of images associated with the listing, wherein the first image is different from each of the one or more images associated with the listing.

7. The method of claim 1, further comprising:

receiving a listing of at least one item of the plurality of items, wherein the listing does not have any images of the plurality of images associated with the listing.

8. The method of claim 1, wherein sending the search results page comprises:

sending the search results page to the user device, the search results page including a first image of a listing associated with a first item of the plurality of items and a first image of a listing associated with a second item of the plurality of items.

9. The method of claim 1, further comprising:

performing image classification on the plurality of images based at least in part on extracting one or more image features from the plurality of images;

generating a confidence match score for each item of the plurality of items based at least in part on the one or more image features; and

mapping the plurality of items to the product based at least in part on the confidence match scores.

10. A system for determining which image of a plurality of images to present in a search results page for a product, comprising:

a memory device for storing instructions;

a processor that, when executing the instructions, causes the system to perform operations comprising:

receiving the plurality of images of a plurality of items associated with the product;

performing image ranking of the plurality of images to identify a first image of the plurality of images of the product based at least in part on a user interaction metric for each image of the plurality of images;

receiving a search query from a user device that maps to the product; and

sending the search results page including at least one item of the plurality of items and the first image to the user device based at least in part on the user interaction metric of the first image.

11. The system of claim 10, wherein sending the search results page comprises:

sending the search results page including the first image to the user device based at least in part on a network bandwidth measurement satisfying a bandwidth threshold.

12. The system of claim 10, wherein the processor, when executing the instructions, causes the system to perform operations comprising:

monitoring user interaction with the plurality of images via one or more search results pages to generate the user interaction metric for each image of the plurality of images, wherein the image ranking of the plurality of images is based at least in part on applying a machine learning model to the user interaction metric.

13. The system of claim 12, wherein monitoring user interaction with the plurality of images comprises:

monitoring, via the one or more search results pages, to generate the user interaction metric: a user viewing time for one or more of the plurality of images, a user image zoom indicator for one or more of the plurality of images, an item sale price for each item in a sold subset of the plurality of items, or any combination thereof.

14. The system of claim 12, wherein the processor, when executing the instructions, causes the system to perform operations comprising:

monitoring, via one or more search results pages, user interaction with the first image to generate an updated user interaction metric for the first image;

ranking the plurality of images to identify a second image of the plurality of images of the product based at least in part on the updated user interaction metric;

receiving a second search query from the user device or a second user device that maps to the product; and

sending a second search results page including the second image to the user device or the second user device, the second image being the same as the first image or different from the first image.

15. The system of claim 10, wherein the processor, when executing the instructions, causes the system to perform operations comprising:

receiving a listing of at least one item of the plurality of items, wherein the listing does not have any images of the plurality of images associated with the listing.

16. The system of claim 10, wherein sending the search results page comprises:

sending the search results page to the user device, the search results page including a first image of a listing associated with a first item of the plurality of items and a first image of a listing associated with a second item of the plurality of items.

17. The system of claim 10, wherein the processor, when executing the instructions, causes the system to perform operations comprising:

receiving a listing of at least one item of the plurality of items and one or more of the plurality of images associated with the listing, wherein the first image is different from each of the one or more images associated with the listing.

18. The system of claim 10, wherein the processor, when executing the instructions, causes the system to perform operations comprising:

performing image classification on the plurality of images based at least in part on extracting one or more image features from the plurality of images;

generating a confidence match score for each item of the plurality of items based at least in part on the one or more image features; and

mapping the plurality of items to the product based at least in part on the confidence match scores.

19. A non-transitory computer-readable medium comprising instructions that, when read by a machine, cause the machine to perform operations for determining which image of a plurality of images to present in a search results page for a product, the operations comprising:

receiving the plurality of images of a plurality of items associated with the product;

performing image ranking of the plurality of images to identify a first image of the plurality of images of the product based at least in part on a user interaction metric for each image of the plurality of images;

receiving a search query from a user device that maps to the product; and

sending the search results page including at least one item of the plurality of items and the first image to the user device based at least in part on the user interaction metric of the first image.

20. The non-transitory computer-readable medium of claim 19, wherein sending the search results page comprises:

sending the search results page including the first image to the user device based at least in part on a network bandwidth measurement satisfying a bandwidth threshold.

Background

Computer networks allow data to be transferred between interconnected computers. Search engine technology allows users to obtain information from a wide array of available sources via a computer network. A search engine may be a program that searches a database and identifies content corresponding to a keyword or character string input by a user, and may search websites available via the internet. To perform a search, a user may interact with a user device, such as a computer or mobile phone, to submit a search query via a search engine. The search engine may perform a search based on communications with other applications and servers and display results of the search query. In some cases, network bandwidth may be limited, and the ability of the network to return search results may be affected by the amount of traffic currently being transmitted over the network. There is a need for techniques to efficiently utilize computer network resources.

Disclosure of Invention

A method of determining which image of a set of images to present in a search results page for a product is described. The method can comprise the following steps: receiving a set of images of a set of items associated with a product; performing image ranking that ranks the set of images to identify a first image in the set of images of the product based on the user interaction metric for each image of the set of images; receiving a search query from a user device that maps to a product; and transmitting a search results page including at least one item of the set of items and the first image to the user device based on the user interaction metric for the first image.

An apparatus for determining which image of a set of images to present in a search results page for a product is described. The apparatus may include a processor, a memory coupled to the processor, and instructions stored in the memory. The instructions are executable by the processor to cause the apparatus to: receiving a set of images of a set of items associated with a product; performing image ranking that ranks the set of images to identify a first image in the set of images of the product based on the user interaction metric for each image of the set of images; receiving a search query from a user device that maps to a product; and transmitting a search results page including at least one item of the set of items and the first image to the user device based on the user interaction metric for the first image.

Another apparatus for determining which image of a set of images to present in a search results page for a product is described. The apparatus may comprise means for performing the steps of: receiving a set of images of a set of items associated with a product; performing image ranking that ranks the set of images to identify a first image in the set of images of the product based on the user interaction metric for each image of the set of images; receiving a search query from a user device that maps to a product; and transmitting a search results page including at least one item of the set of items and the first image to the user device based on the user interaction metric for the first image.

A non-transitory computer-readable medium storing code for determining which image of a set of images to present in a search results page for a product is described. The code may include instructions executable by a processor to perform the steps of: receiving a set of images of a set of items associated with a product; performing image ranking that ranks the set of images to identify a first image in the set of images of the product based on the user interaction metric for each image of the set of images; receiving a search query from a user device that maps to a product; and transmitting a search results page including at least one item of the set of items and the first image to the user device based on the user interaction metric for the first image.

In some examples of the methods, apparatus, and non-transitory computer-readable media described herein, sending the search results page may include operations, features, means, or instructions for sending the search results page including the first image to the user device based on the network bandwidth measurement satisfying the bandwidth threshold.

Some examples of the methods, apparatus, and non-transitory computer-readable media described herein may also include operations, features, means, or instructions for monitoring user interaction with the set of images via one or more search results pages to generate a user interaction metric for each image of the set of images, wherein ranking the set of images may be based on applying a machine learning model to the user interaction metric.

In some examples of the methods, apparatus, and non-transitory computer-readable media described herein, monitoring user interaction with a set of images may include operations, features, means, or instructions for monitoring, via one or more search results pages, to generate a user interaction metric: a user viewing time for one or more images in the set of images, a user image zoom indicator for one or more images in the set of images, an item sale price for each item in a sold subset of the set of items, or any combination thereof.

Some examples of the methods, apparatus, and non-transitory computer-readable media described herein may also include operations, features, means, or instructions for performing the steps of: monitoring user interaction with the first image via one or more search results pages to generate updated user interaction metrics for the first image; ranking the set of images based on the updated user interaction metrics to identify a second image in the set of images of the product; receiving a second search query from the user device or a second user device that may be mapped to the product; and sending a second search results page including a second image, which may be the same as or different from the first image, to the user device or a second user device.

Some examples of the methods, apparatus, and non-transitory computer-readable media described herein may also include operations, features, means, or instructions for receiving a listing of at least one item in the set of items and one or more images in the set of images that may be associated with the listing, wherein the first image is different from each of the one or more images that may be associated with the listing.

Some examples of the methods, apparatus, and non-transitory computer-readable media described herein may also include operations, features, means, or instructions for receiving a listing of at least one item in the set of items, wherein the listing does not have any images in the set of images associated with the listing.

In some examples of the methods, apparatus, and non-transitory computer-readable media described herein, sending the search results page may include operations, features, means, or instructions for sending the search results page to the user device that includes a first image of a listing associated with a first item in the set of items and a first image of a listing associated with a second item in the set of items.

Some examples of the methods, apparatus, and non-transitory computer-readable media described herein may also include operations, features, means, or instructions for performing the steps of: performing image classification on the set of images based on extracting one or more image features from the set of images; generating a confidence match score for each item in the set of items based on the image features; and mapping the set of items to the product based on the confidence match score.

A system for determining which image of a set of images to present in a search results page for a product is described. The system may include a memory device to store instructions and a processor that, when executing the instructions, causes the system to perform operations comprising: performing image ranking that ranks the set of images to identify a first image in the set of images of the product based on the user interaction metric for each image of the set of images; receiving a search query from a user device that maps to the product; and transmitting a search results page including at least one item of the set of items and the first image to the user device based on the user interaction metric for the first image.

An apparatus for determining which image of a set of images to present in a search results page for a product is described. The apparatus may include a processor, a memory coupled to the processor, and instructions stored in the memory. The instructions are executable by the processor to cause the apparatus to perform operations comprising: performing image ranking that ranks the set of images to identify a first image in the set of images of the product based on the user interaction metric for each image of the set of images; receiving a search query from a user device that maps to the product; and transmitting a search results page including at least one item of the set of items and the first image to the user device based on the user interaction metric for the first image.

Another apparatus for determining which image of a set of images to present in a search results page for a product is described. The apparatus may include a memory device to store instructions and a processor that, when executing the instructions, causes the apparatus to perform operations comprising: performing image ranking that ranks the set of images to identify a first image in the set of images of the product based on the user interaction metric for each image of the set of images; receiving a search query from a user device that maps to the product; and transmitting a search results page including at least one item of the set of items and the first image to the user device based on the user interaction metric for the first image.

A non-transitory computer-readable medium storing code for determining which image of a set of images to present in a search results page for a product is described. The code may include instructions executable by a processor to perform operations comprising: performing image ranking that ranks the set of images to identify a first image in the set of images of the product based on the user interaction metric for each image of the set of images; receiving a search query from a user device that maps to the product; and transmitting a search results page including at least one item of the set of items and the first image to the user device based on the user interaction metric for the first image.

In some examples of the methods, apparatus, and non-transitory computer-readable media described herein, sending the search results page may include operations, features, means, or instructions for sending the search results page including the first image to the user device based on the network bandwidth measurement satisfying the bandwidth threshold.

Some examples of the methods, apparatus, and non-transitory computer-readable media described herein may also include operations, features, means, or instructions for monitoring user interaction with the set of images via one or more search results pages to generate a user interaction metric for each image of the set of images, wherein ranking the set of images may be based on applying a machine learning model to the user interaction metric.

In some examples of the methods, apparatus, and non-transitory computer-readable media described herein, monitoring user interaction with a set of images may include operations, features, means, or instructions for monitoring, via one or more search results pages, to generate a user interaction metric: a user viewing time for one or more images in the set of images, a user image zoom indicator for one or more images in the set of images, an item sale price for each item in a sold subset of the set of items, or any combination thereof.

Some examples of the methods, apparatus, and non-transitory computer-readable media described herein may also include operations, features, means, or instructions for performing the steps of: monitoring user interaction with the first image via one or more search results pages to generate updated user interaction metrics for the first image; ranking the set of images based on the updated user interaction metrics to identify a second image in the set of images of the product; receiving a second search query from the user device or a second user device that may be mapped to the product; and sending a second search results page including a second image, which may be the same as or different from the first image, to the user device or a second user device.

Some examples of the methods, apparatus, and non-transitory computer-readable media described herein may also include operations, features, means, or instructions for receiving a listing of at least one item in the set of items, wherein the listing does not have any images in the set of images associated with the listing.

In some examples of the methods, apparatus, and non-transitory computer-readable media described herein, sending the search results page may include operations, features, means, or instructions for sending the search results page to the user device that includes a first image of a listing associated with a first item in the set of items and a first image of a listing associated with a second item in the set of items.

Some examples of the methods, apparatus, and non-transitory computer-readable media described herein may also include operations, features, means, or instructions for receiving a listing of at least one item in the set of items and one or more images in the set of images that may be associated with the listing, wherein the first image is different from each of the one or more images that may be associated with the listing.

Some examples of the methods, apparatus, and non-transitory computer-readable media described herein may also include operations, features, means, or instructions for performing the steps of: performing image classification on the set of images based on extracting one or more image features from the set of images; generating a confidence match score for each item in the set of items based on the image features; and mapping the set of items to the product based on the confidence match score.

A method of determining which image of a set of images to present in a search results page for a product is described. The method can comprise the following steps: receiving a set of images of a set of items associated with a product; performing image ranking that ranks the set of images to identify a first image in the set of images of the product based on the user interaction metric for each image of the set of images; receiving a search query from a user device that maps to a product; and transmitting a search results page including at least one item of the set of items and the first image to the user device based on the user interaction metric for the first image.

An apparatus for determining which image of a set of images to present in a search results page for a product is described. The apparatus may include a processor, a memory coupled to the processor, and instructions stored in the memory. The instructions are executable by the processor to cause the apparatus to: receiving a set of images of a set of items associated with a product; performing image ranking that ranks the set of images to identify a first image in the set of images of the product based on the user interaction metric for each image of the set of images; receiving a search query from a user device that maps to a product; and transmitting a search results page including at least one item of the set of items and the first image to the user device based on the user interaction metric for the first image.

Another apparatus for determining which image of a set of images to present in a search results page for a product is described. The apparatus may comprise means for performing the steps of: receiving a set of images of a set of items associated with a product; performing image ranking that ranks the set of images to identify a first image in the set of images of the product based on the user interaction metric for each image of the set of images; receiving a search query from a user device that maps to a product; and transmitting a search results page including at least one item of the set of items and the first image to the user device based on the user interaction metric for the first image.

A non-transitory computer-readable medium storing code for determining which image of a set of images to present in a search results page for a product is described. The code may include instructions executable by a processor to perform the steps of: receiving a set of images of a set of items associated with a product; performing image ranking that ranks the set of images to identify a first image in the set of images of the product based on the user interaction metric for each image of the set of images; receiving a search query from a user device that maps to a product; and transmitting a search results page including at least one item of the set of items and the first image to the user device based on the user interaction metric for the first image.

In some examples of the methods, apparatus, and non-transitory computer-readable media described herein, sending the search results page may include operations, features, means, or instructions for sending the search results page including the first image to the user device based on the network bandwidth measurement satisfying the bandwidth threshold.

Drawings

Fig. 1 illustrates an example of a server system supporting network and speed enhancement for distributing unified images via a computer network, according to aspects of the present disclosure.

Fig. 2 illustrates an example of an application flow supporting network and speed enhancement for distributing unified images via a computer network, according to aspects of the present disclosure.

FIG. 3 illustrates an example of a web page supporting a network and speed enhancement for distributing unified images via a computer network, according to aspects of the present disclosure.

FIG. 4 illustrates an example of a web page supporting a network and speed enhancement for distributing unified images via a computer network, according to aspects of the present disclosure.

Fig. 5 illustrates an example of a process flow supporting network and speed enhancement for distributing unified images via a computer network, in accordance with aspects of the present disclosure.

Fig. 6 illustrates a block diagram of a device supporting a network and speed enhancement for distributing unified images via a computer network, in accordance with aspects of the present disclosure.

FIG. 7 illustrates a block diagram of an image machine learning analysis component that supports network and speed enhancement for distributing unified images via a computer network in accordance with aspects of the present disclosure.

Fig. 8 illustrates a diagram of a system including a device supporting a network and speed enhancement for distributing unified images via a computer network, in accordance with aspects of the present disclosure.

Figs. 9-11 show flow diagrams illustrating methods of supporting network and speed enhancement for distributing unified images via a computer network, according to aspects of the present disclosure.

Detailed Description

Computer network bandwidth is a limited resource and may refer to the rate at which data is transferred between various computing devices. For example, the internet may be a network that transmits data for software applications ("Apps") and websites among a set of computers. At some point, the amount of data transmitted over a computer network may occupy a significant amount of available network bandwidth. Rather than immediately transmitting data from one device to another, the data may be buffered on one device due to network congestion, which may reduce the ability of the network to transmit the requested data to the requesting device in a timely manner. This delay may be referred to as latency and may degrade the end user experience.

The techniques described herein may provide network and speed enhancements for distributing unified images via a computer network. In an example, a server system may host an online application (e.g., a website or app) that is accessible by end-user client computing devices via a computer network. In an example, the online application may be a customer-facing website of an online marketplace (e.g., an online retail platform) where users may purchase goods and/or services via the online application. In some cases, the online marketplace may allow a seller (e.g., a business or user) to set a price for an item for sale. An item may refer to a product having a particular set of characteristics. In some examples, the online marketplace may implement an online auction in which buyers may submit bids for items at desired prices.

The online application may provide a graphical user interface that may be presented on a user device, where a seller may generate a listing of one or more items (e.g., products, services, etc.) that the seller wishes to sell. As part of generating the listing, in some examples, the online application may prompt the seller to: upload an image (e.g., a photograph) of the item for sale; input an item description, a listing title, and an item Universal Product Code (UPC); provide a sale price or a starting bid for an online auction; include an immediate purchase price for the item; or any combination thereof. The seller may utilize the online application to list a plurality of items of the same type or various items of different item types (e.g., different types of products) for sale. Multiple sellers may also upload listings of the same item or listings of similar items that differ slightly (e.g., in size, color, age, etc.).
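
To make the shape of such a listing concrete, the following is a minimal sketch of a listing record in Python. The field names and example values are hypothetical illustrations and are not drawn from the disclosure.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Listing:
    """Minimal seller listing record (illustrative field names only)."""
    seller_id: str
    title: str
    description: str
    upc: Optional[str] = None             # item Universal Product Code, if provided
    sale_price: Optional[float] = None    # fixed sale price, if any
    starting_bid: Optional[float] = None  # starting bid for an online auction
    buy_now_price: Optional[float] = None # immediate purchase price
    image_urls: List[str] = field(default_factory=list)  # seller-uploaded images
    product_id: Optional[str] = None      # product the listing is mapped to

# Example: a listing for a tablet computer
listing = Listing(
    seller_id="seller-42",
    title="Apple iPad Air 64GB Wi-Fi",
    description="Lightly used, original box included.",
    sale_price=299.00,
    image_urls=["https://example.com/images/ipad-front.jpg"],
)
print(listing.title, listing.sale_price)
```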

A buyer can access the online application and browse different listings of marketable items from one or more sellers using a user device (e.g., a buyer device). Via a user device presenting a graphical user interface provided by the online application, a buyer may enter a search query describing an item (e.g., a product) that the buyer may wish to purchase. The server system may process the query to identify at least one product corresponding to the query and one or more seller listings for the product. The server system may send a search results page to the buyer device that includes one or more listings of items to be presented to the buyer.

Network traffic conditions present challenges when providing search results to a buyer user device via a computer network. The amount of data included in each search results page may vary based on the number of listings in the search results page and the amount of data included in each listing, which may itself vary from listing to listing. In particular, conventional systems may include one or more images provided by the seller when generating a listing, and the data size of the images included in each listing may vary. Further, when generating listings for the same or similar items, sellers may upload very different images. Sending a listing, or search results that include multiple listings, may consume network bandwidth and may affect network bandwidth utilization. In addition, because the images in the listings are of different sizes and depict the same or similar items differently, the search results page may include a large amount of data, and transmission of the search results page may affect network utilization. Further, certain images included by a seller in a listing may inadvertently work against the seller's goal of selling the item at a desired price (e.g., the highest possible price).

The techniques described herein may provide network and speed enhancements for distributing unified images via a computer network. The server system may employ machine learning techniques to efficiently utilize network bandwidth by selecting representative images (e.g., unified images) for display in listings of marketable items (e.g., products). When a listing is created, the server system may map the items listed for sale to specific products and may receive uploaded images of the items listed for sale. In some cases, multiple listings may be mapped to a particular product. In some examples, items mapped to a product may be the exact same version of the product, or the items may differ in at least some way (e.g., color) but still map to the same product.

The server system may use machine learning to monitor buyer behavior to determine which images of the product lead to a desired result, and may select a representative image (e.g., a unified image) of the product based on the monitoring. The desired result may be, for example, an increased likelihood of a purchase by the buyer, an increased gross merchandise bought (GMB), etc. For example, machine learning may determine a unified image for a product based on user interaction metrics generated for each seller-uploaded image associated with a listing of the product. In an example, the machine learning model may generate the user interaction metric based on: the amount of time a prospective buyer spends viewing the image, whether the prospective buyer actually purchased the item listed for sale, whether the prospective buyer enlarged or otherwise manipulated the image, how many of the images published for the listing the prospective buyer selected to view, the purchase price paid by the buyer for the listed item, and the like, or any combination thereof. The user interaction metric may be a numerical value assigned to each seller-uploaded image of each listing of the product.
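
As a rough sketch of how the described signals could be combined into a single numerical user interaction metric, the snippet below computes a weighted sum of a few monitored parameters. The choice of parameters, the weights, and the scaling are illustrative assumptions, not the model described in the disclosure.

```python
def user_interaction_metric(view_seconds: float,
                            zoomed: bool,
                            images_viewed: int,
                            purchased: bool,
                            sale_price: float = 0.0) -> float:
    """Combine monitored interaction signals into one score (illustrative weights)."""
    score = 0.0
    score += 0.2 * min(view_seconds / 30.0, 1.0)   # viewing time, capped at 30 s
    score += 0.1 * (1.0 if zoomed else 0.0)        # image zoom / manipulation indicator
    score += 0.1 * min(images_viewed / 5.0, 1.0)   # how many listing images were opened
    score += 0.4 * (1.0 if purchased else 0.0)     # whether the listed item actually sold
    score += 0.2 * min(sale_price / 500.0, 1.0)    # sale price, scaled to a 0-1 range
    return score

# Example: an image that was zoomed and led to a $300 sale
print(user_interaction_metric(view_seconds=12, zoomed=True,
                              images_viewed=3, purchased=True, sale_price=300.0))
```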

The machine learning model may generate user interaction metrics for the seller-uploaded images based on determining how well the images are able to achieve the desired result (e.g., selling items quickly at a higher price compared with images in other listings of the product). When generating the user interaction metrics, the machine learning model may normalize the user interaction metrics to account for any differences (e.g., different colors) between the listings. The machine learning model may rank (e.g., arrange in numerical order) the seller-uploaded images available for the product based on the user interaction metrics, and select a unified image of the product (e.g., select the image whose user interaction metric has the highest numerical score as the representative image). The machine learning model may also use a feedback loop to iteratively update the selected unified image over time. For example, when network utilization conditions are favorable, the server system may include in the search results page one or more images from the listings in addition to the previously identified unified image to generate user interaction metrics for each image, and may use the generated user interaction metrics to determine whether to maintain or change the unified image.
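
One way to read the ranking and feedback-loop description is the sketch below: per-image metrics are min-max normalized within a product, the highest-scoring image becomes the unified image, and the selection is simply repeated when updated metrics arrive. The normalization scheme and the data layout are assumptions for illustration, not the disclosed model.

```python
from typing import Dict

def select_unified_image(metrics: Dict[str, float]) -> str:
    """Pick the image id whose (min-max normalized) interaction metric is highest."""
    lo, hi = min(metrics.values()), max(metrics.values())
    span = (hi - lo) or 1.0
    normalized = {img: (m - lo) / span for img, m in metrics.items()}
    # Rank images by normalized score and return the top one as the unified image.
    return max(normalized, key=normalized.get)

# Initial selection from seller-uploaded images of one product
metrics = {"img-a": 0.42, "img-b": 0.77, "img-c": 0.55}
unified = select_unified_image(metrics)      # -> "img-b"

# Feedback loop: metrics gathered later may keep or change the choice
updated = {"img-a": 0.81, "img-b": 0.70, "img-c": 0.52}
unified = select_unified_image(updated)      # -> "img-a"
print(unified)
```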

The server system may also monitor network utilization and may intelligently select when to provide the unified image when providing search results. When network utilization is low, the server system may provide a search results page that includes images uploaded by the sellers, with or without the unified image (e.g., to update the user interaction metrics for the images). When network utilization is high (e.g., a congestion threshold is met), the server system may provide a search results page that includes the unified image instead of one or more seller-uploaded images.
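
A minimal sketch of this bandwidth-based switching follows; the utilization measure, the 0.8 congestion threshold, and the choice to also include the unified image in the low-utilization case are placeholders rather than values from the disclosure.

```python
def images_for_results_page(network_utilization: float,
                            unified_image: str,
                            seller_images: list,
                            congestion_threshold: float = 0.8) -> list:
    """Return the image set to embed in a search results page.

    When utilization meets the congestion threshold, send only the unified
    image; otherwise include the seller-uploaded images (here together with
    the unified image, which is useful for gathering fresh interaction metrics).
    """
    if network_utilization >= congestion_threshold:
        return [unified_image]
    return seller_images + [unified_image]

print(images_for_results_page(0.9, "unified.jpg", ["a.jpg", "b.jpg"]))  # ['unified.jpg']
print(images_for_results_page(0.3, "unified.jpg", ["a.jpg", "b.jpg"]))  # ['a.jpg', 'b.jpg', 'unified.jpg']
```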

Providing a unified image may increase speed, improve bandwidth utilization, enhance the likelihood of a desired result for the listing, and improve the buyer's user experience. A unified image may also be provided for a listing in the event that the seller does not upload images when generating the listing. Further, in some examples, the server system may provide a single unified image to the buyer device for a search results page that includes multiple listings. In the event that a low-bandwidth connection is detected for the user or the network is bandwidth limited, it may be beneficial to provide a single unified image, since the user device may download only one image rather than many images. Unified image identification may also improve the speed at which search results are provided to the buyer device.

Aspects of the present disclosure are first described in the context of a server system and data processing. Aspects of the disclosure are then described in the context of application flows, web pages, and process flows. Further, aspects of the present disclosure are illustrated by, and described with reference to, device diagrams, system diagrams, and flow charts relating to networks and speed enhancements for distributing unified images via computer networks.

Fig. 1 shows an example of a system 100 that supports network and speed enhancement for distributing unified images via a computer network, according to aspects of the present disclosure. System 100 includes cloud client 105, user device 110, cloud platform 115, and data center 120. Cloud platform 115 may be an example of a public or private cloud network. Cloud client 105 may access cloud platform 115 through network connection 135. The network may implement the Transmission Control Protocol and the Internet Protocol (TCP/IP) (e.g., the internet), or may implement other network protocols. Cloud client 105 may be an example of a computing device, such as a server (e.g., cloud client 105-a), a smartphone (e.g., cloud client 105-b), or a laptop (e.g., cloud client 105-c). In other examples, cloud client 105 may be a desktop computer, a tablet computer, a sensor, or another computing device or system capable of generating, analyzing, sending, or receiving communications. In some examples, cloud client 105 may be part of an enterprise, a company, a non-profit organization, a startup, or any other organization type.

Cloud client 105 may facilitate communication between data center 120 and one or more user devices 110 to enable an online marketplace. Network connection 130 may include communications, opportunities, purchases, sales, or any other interaction between cloud client 105 and user device 110. Cloud client 105 may access cloud platform 115 to store, manage, and process data communicated via one or more network connections 130. In some cases, cloud client 105 may have an associated security or permission level. Cloud client 105 may access certain applications, data, and database information within cloud platform 115 and may not access other applications, data, and database information based on the associated security or permission level.

User device 110 may interact with cloud client 105 through network connection 130. The network may implement the Transmission Control Protocol and the Internet Protocol (TCP/IP) (e.g., the internet), or may implement other network protocols. The network connection 130 may facilitate the transfer of data over a computer network via e-mail, a website, a text message, mail, or any other suitable form of electronic interaction (e.g., network connections 130-a, 130-b, 130-c, and 130-d). In an example, user device 110 may be a computing device such as a smartphone 110-a or a laptop computer 110-b, or may be a server 110-c or a sensor 110-d. In other cases, user device 110 may be another computing system. In some cases, user device 110 may be operated by a user or group of users. The user or group of users may be customers associated with an enterprise, a manufacturer, or any other suitable organization.

Cloud platform 115 may provide on-demand database services to cloud client 105. In some cases, cloud platform 115 may be an example of a multi-tenant database system. In this case, cloud platform 115 may serve multiple cloud clients 105 with a single software instance. However, other types of systems may be implemented, including but not limited to client-server systems, mobile device systems, and mobile network systems. In some cases, the cloud platform 115 may support online applications. This may include support for: sales between buyers and sellers operating the user device 110, services, marketing of products listed by sellers, social interactions between the buyers and sellers, analytics data such as user interaction metrics, applications (e.g., computer vision and machine learning), and the internet of things. Cloud platform 115 may receive data associated with generating the online marketplace from cloud client 105 over network connection 135, and may store and analyze the data. In some cases, cloud platform 115 may receive data directly from user device 110 and cloud client 105. In some cases, cloud client 105 may develop an application to be run on cloud platform 115. The cloud platform 115 may be implemented using a remote server. In some cases, the remote servers may be located at one or more data centers 120.

Data center 120 may include multiple servers. Multiple servers may be used for data storage, management, and processing. Data center 120 may receive data from cloud platform 115 via connection 140, or directly from cloud client 105 or via network connection 130 between user device 110 and cloud client 105. The data center 120 may utilize multiple redundancies for security purposes. In some cases, the data stored at data center 120 may be backed up by a copy of the data at a different data center (not shown).

The server system 125 can include the cloud client 105, the cloud platform 115, the image machine learning analysis component 145, and the data center 120, which can cooperate to enable an online marketplace. In some cases, data processing may occur at any component of server system 125 or at a combination of these components. In some cases, a server may perform the data processing. The server may be a cloud client 105 or located at a data center 120.

The image machine learning analysis component 145 may communicate with the cloud platform 115 via a connection 155 and may also communicate with the data center 120 via a connection 150. The image machine learning analysis component 145 may receive signals and inputs from the user device 110 via the cloud client 105 and via the cloud platform 115 or the data center 120.

The server system 125 may perform operations as described herein. As described herein, one or more components of the server system 125 (including the image machine learning analysis component 145) can operate to determine which image of a set of images is to be presented in a search results page for a product. The image machine learning analysis component 145 within the server system 125 may receive, via a seller user device 110 and the cloud platform 115, a set of images of a set of items that may be associated with a product. The server system 125 and the image machine learning analysis component 145 may perform image ranking of the set of images to identify a first image in the set of images of the product based on the user interaction metric for each image of the set of images. The server system 125 and the image machine learning analysis component 145 can receive a search query from a buyer user device 110 (e.g., any user device 110) that can be mapped to the product. The server system 125 and the image machine learning analysis component 145 can then transmit a search results page including at least one item of the set of items and the first image (e.g., the unified image) to the user device (e.g., any of the user devices 110).
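
At a very high level, the operations attributed to the server system 125 in the preceding paragraph could be strung together as in the sketch below. The `handle_search` function and the `product_index` and `image_metrics` structures are hypothetical stand-ins, not components of the actual system.

```python
def handle_search(query: str, product_index: dict, image_metrics: dict) -> dict:
    """Map a query to a product, pick its top-ranked image, and build a results page."""
    product_id = product_index.get(query)            # map the search query to a product
    if product_id is None:
        return {"listings": [], "image": None}
    metrics = image_metrics[product_id]              # per-image user interaction metrics
    unified_image = max(metrics, key=metrics.get)    # image ranking: take the top score
    return {"product": product_id,
            "listings": ["listing-1", "listing-2"],  # placeholder listing identifiers
            "image": unified_image}

product_index = {"ipad air 64gb": "prod-123"}
image_metrics = {"prod-123": {"img-a": 0.4, "img-b": 0.9}}
print(handle_search("ipad air 64gb", product_index, image_metrics))
```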

Those skilled in the art will appreciate that one or more aspects of the present disclosure may be implemented in the system 100 to additionally or alternatively solve other problems in addition to those described above. Further, aspects of the present disclosure may provide technical improvements over "conventional" systems or processes as described herein. However, the present specification and drawings include only exemplary technical improvements resulting from implementation of the aspects of the present disclosure, and thus do not represent all technical improvements provided within the scope of the claims.

Fig. 2 illustrates an example of an application flow 200 supporting network and speed enhancement for distributing unified images via a computer network, according to aspects of the present disclosure. The components of the application flow 200 may include components of a server system, such as the server system 125 (as described with reference to FIG. 1) or the server system 125-b (as described with reference to FIG. 5) of the system 100 for implementing an online marketplace. Certain components of the application flow 200 may be within or in communication with a data center (e.g., data center 120) or a cloud platform (e.g., cloud platform 115) or both. The application flow 200 may represent multiple components for selecting a unified image of an image set of a product in order to efficiently utilize available network bandwidth.

The sell flow component 205 can interact with one or more users, or "sellers," who may intend to sell one or more items (e.g., products) via an online marketplace, to generate listings. The seller may be a user operating a user device, such as user device 110 or user device 505 described with respect to Figs. 1 and 5. Interaction with the sell flow component 205 can prompt the seller to enter a plurality of parameters describing the items to be listed for sale. In an example, the sell flow component 205 can cause the user device 110 to present a graphical user interface for generating the listing. The seller can generate a listing of items (e.g., products) for sale (including a description of the products) and, in some cases, can upload one or more images of the items to the sell flow component 205. The sell flow component 205 can suggest products to the seller for listing based on the product description provided by the seller. In some cases, the sell flow component 205 may cause the seller user device 110 to display a menu for the seller to select a suggested product for the listing. In an example, the seller can interact with the sell flow component 205 to generate a listing for a tablet computer (e.g., Apple iPad). The specific Apple iPad listed by the seller may include other features in the listing. For example, the listing may indicate that the product for sale is an Apple iPad Air 64GB with Wi-Fi functionality.

The sell flow component 205 can categorize the listing as being for a particular product in a set of products available for purchase via the online marketplace. A listing may map to a particular product where the listed items for sale have the same or similar characteristics; some differences between items may be allowed while still mapping to the same product. In some cases, the seller generating the listing may select or recommend that the listing be for a particular product. The product recommended by the seller for the listing may be updated or changed by the sell flow component 205 or the machine learning training component 220.

In some examples, the sell flow component 205 can classify a set of one or more items as being for a product through a product identification mapping process. The product identification mapping process may include analysis of the initial product suggested by the seller, a confidence analysis of the accuracy of the selection based on the title, product details, computer vision analysis of images uploaded by one or more sellers, and the like. The product identification mapping process may also be extended to other similar product clusters using algorithms such as the k-nearest neighbors (KNN) algorithm. The product identification process can be performed by the sell flow component 205 or the machine learning training component 220.
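
For the KNN-based expansion to similar product clusters, a minimal sketch using scikit-learn's NearestNeighbors is shown below, assuming each already-mapped product is represented by a feature vector. The product identifiers and feature values are made up, and scikit-learn is only one possible implementation.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Toy feature vectors for already-mapped products (values are made up).
product_ids = ["ipad-air-64", "ipad-air-256", "galaxy-tab-s7", "kindle-paperwhite"]
features = np.array([
    [0.90, 0.10, 0.80],
    [0.88, 0.12, 0.82],
    [0.70, 0.30, 0.75],
    [0.20, 0.90, 0.10],
])

nn = NearestNeighbors(n_neighbors=2).fit(features)

# Features extracted from a new listing (e.g., from its title or images).
new_listing = np.array([[0.89, 0.11, 0.81]])
distances, indices = nn.kneighbors(new_listing)

# The closest existing products form the candidate cluster for the new listing.
print([product_ids[i] for i in indices[0]])   # e.g. ['ipad-air-256', 'ipad-air-64']
```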

In some examples, the sell flow component 205 or the machine learning training component 220 may execute a computer vision machine learning algorithm to confirm that it is appropriate to classify the item into a particular product category. An example of a computer vision machine learning algorithm may be a convolutional neural network, such as a residual network (e.g., ResNet-50, a residual network with 50 layers). In an example, image classification may be performed on some or all of the images uploaded for an item (e.g., when creating or updating a listing) to verify that the item is associated with the product suggested by the user. The computer vision machine learning system may extract one or more image features of the images and determine a confidence match score for each image. The confidence match score may indicate how confident the computer vision machine learning algorithm is that the image depicts a particular product (e.g., an eBay catalog product).

To generate a confidence match score, the computer vision machine learning algorithm may extract one or more image features from the one or more uploaded images to match against one or more representative image features of a representative image set of the product. The one or more image features may include, for example, a shape of the depicted item, a color of the depicted item, one or more edges of the depicted item, and/or the like. The computer vision machine learning algorithm may assign a confidence match score based on the degree to which the extracted one or more image features match the one or more representative image features. In some examples, the confidence match score may be a numerical value.
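
One minimal way to turn such feature matching into a numerical confidence match score is a cosine similarity between an uploaded image's feature vector and a product's representative feature vector, as sketched below. In practice the vectors might come from a network such as ResNet-50; here they are toy values, and the rescaling to [0, 1] is an assumption.

```python
import numpy as np

def confidence_match_score(image_features: np.ndarray,
                           representative_features: np.ndarray) -> float:
    """Cosine similarity between extracted and representative image features,
    rescaled to [0, 1] so it can be read as a confidence match score."""
    cos = float(np.dot(image_features, representative_features) /
                (np.linalg.norm(image_features) * np.linalg.norm(representative_features)))
    return (cos + 1.0) / 2.0

# Toy feature vectors (a real system might use ResNet-50 embeddings instead).
uploaded = np.array([0.80, 0.10, 0.60, 0.20])
ipad_air = np.array([0.82, 0.12, 0.58, 0.18])
print(round(confidence_match_score(uploaded, ipad_air), 3))  # close to 1.0 -> strong match
```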

The computer vision machine learning algorithm may use the seller-uploaded images to generate confidence match scores with respect to a plurality of products, and may rank the confidence match scores to identify the product with the best degree of match (e.g., the highest confidence match score) for a particular listing. The confidence match scores may also be used to confirm that the product suggested by the seller for a listing is appropriate, or to change the product suggested by the seller for the listing to another product that better matches the uploaded images. For example, if the computer vision machine learning algorithm determines that the confidence match score between the uploaded image and a first product is low, the computer vision machine learning algorithm may determine that the first product indicated by the seller is incorrect or may change the product associated with the listing to another product having a higher confidence match score.

The tracking service component 210 may track each listing uploaded by one or more sellers. The tracking service component 210 can forward the listing and the corresponding seller-uploaded images for storage in the distributed file system component 215. The tracking service component 210 may monitor the behavior of buyers while viewing one or more listings in a search results page. Examples of search results pages that include listings that may be monitored are also discussed with reference to Figs. 3 and 4. The tracking service component 210 may monitor the listings presented in a search results page, monitor the user's interaction with the product listings, and pass the user interaction parameters to the distributed file system component 215. The distributed file system component 215 may be an example of a Hadoop application. The distributed file system component 215 may analyze large amounts of data using a network of multiple computers. The distributed file system component 215 can monitor and analyze sales throughout the online application as well as analyze sales based on user interaction parameters detected by the tracking service component 210.

The machine learning training component 220 may use machine learning models to rank the images and select a unified image for the product. For example, where bandwidth is limited and throughput may be improved by providing one unified image to represent each listing in a set of listings for a product, rather than providing the one or more seller-uploaded images for each listing, the unified image may be included in the search results returned to the prospective buyer.

The machine learning training component 220 may use a machine learning model that selects a unified image for a product based on monitoring buyer interactions with listings presented to other buyers in a search results page. The machine learning model may be a computer algorithm. The machine learning training component 220 may apply a machine learning model to one or more user interaction parameters generated for the product listings to identify a unified image. In an example, the user interaction parameters may include the length of time a buyer spends viewing a particular image before purchasing or failing to purchase the listed item. The user interaction parameters may include whether the buyer actually purchased the listed item for sale after viewing the image. The user interaction parameters may include which images of the listing were manipulated (e.g., zoomed, rotated) by the buyer before purchasing or failing to purchase the listed item. The user interaction parameters may include whether the buyer clicked on multiple images of the listing before purchasing or failing to purchase the listed item. The user interaction parameters may include which image of the listing the buyer first clicked on before purchasing or failing to purchase the listed item. The user interaction parameters may include whether the buyer selected a thumbnail-sized image presented in the search results page to view a full-sized version of the thumbnail-sized image before purchasing or failing to purchase the listed item. The user interaction parameters may include how many listing images the buyer chose to view before purchasing or failing to purchase the listed item. The user interaction parameters may include the purchase price paid by the buyer for the listed item. The user interaction parameters may include a first purchase price paid by a first buyer for a listed item of the product after viewing a first image relative to a second purchase price paid by a second buyer for a listed item of the product after viewing a second image. One or more user interaction parameters may be generated for one or more of the seller-uploaded images and other images included in the listing. In some cases, images included in a search results page may be received from an organization marketing the product, and one or more user interaction parameters may be generated for such images.
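
The monitored parameters could be carried as a simple per-image record like the one below before being passed to the machine learning training component; every field name here is a hypothetical stand-in for the parameters listed above.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ImageInteraction:
    """Per-image user interaction parameters observed on a search results page."""
    image_id: str
    listing_id: str
    view_seconds: float = 0.0          # time spent viewing the image
    zoomed_or_rotated: bool = False    # whether the buyer manipulated the image
    clicked_first: bool = False        # whether this image was clicked first
    thumbnail_expanded: bool = False   # thumbnail opened to its full-sized version
    images_viewed_in_listing: int = 0  # how many listing images the buyer viewed
    purchased: bool = False            # whether the listed item was purchased
    purchase_price: Optional[float] = None

event = ImageInteraction(image_id="img-b", listing_id="listing-7",
                         view_seconds=14.5, zoomed_or_rotated=True,
                         purchased=True, purchase_price=299.0)
print(event)
```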

The tracking service component 210 may observe, over time, buyers' interactions with one or more images of one or more listings of products presented to the buyers on a graphical user interface of a buyer user device (e.g., the user device 110) to generate one or more of these user interaction parameters. The tracking service component 210 can pass these parameters to the machine learning training component 220. The machine learning training component 220 may use one or more of these user interaction parameters, or a combination thereof, to generate a user interaction metric for each image of each listing of the product.

The machine learning training component 220 may generate a user interaction metric for each image based on determining how well the image is able to achieve a desired result (e.g., selling items quickly and at a higher price compared to images of other listings of the product). The user interaction metric may apply a weight to some or all of the one or more user interaction parameters to determine a numerical score that may indicate the extent to which the image is capable of achieving the desired result. When generating the user interaction metrics, the machine learning training component 220 may normalize the user interaction metrics to account for any differences between the items in the listings. The user interaction metric may be a numerical value assigned to each image of each listing of the product. The machine learning model may rank (e.g., arrange in numerical order) the images available for the product based on the user interaction metrics, and select a unified image of the product (e.g., select the image whose user interaction metric has the highest numerical score as the representative image). In some cases, user interaction metrics may be generated for the seller-uploaded images as well as other images provided by, for example, an organization marketing the product. The unified image may be one of the seller-uploaded images or an image obtained from another source. When a subsequent search query for the product is received from the same buyer or another buyer, the unified image may be included in the product listings presented in the search results page instead of, or in addition to, one or more seller-uploaded images for the product.
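As a rough illustration of the weighting, normalization, and ranking steps, the sketch below scores each image of a product from its interaction records and picks the highest-scoring image as the unified image. The weights are arbitrary placeholders; the disclosure leaves the actual weighting to the machine learning model.

```python
from collections import defaultdict
from statistics import mean

# Placeholder weights; in practice the weighting is learned, not fixed.
WEIGHTS = {"view_time_s": 0.2, "zoomed": 1.0, "purchased": 5.0, "sale_price": 0.01}

def interaction_metric(events):
    """Aggregate interaction records (e.g., ImageInteraction) for one image into a score."""
    score = 0.0
    score += WEIGHTS["view_time_s"] * mean(e.view_time_s for e in events)
    score += WEIGHTS["zoomed"] * mean(1.0 if e.zoomed else 0.0 for e in events)
    score += WEIGHTS["purchased"] * mean(1.0 if e.purchased else 0.0 for e in events)
    prices = [e.sale_price for e in events if e.sale_price is not None]
    if prices:
        score += WEIGHTS["sale_price"] * mean(prices)
    return score

def select_unified_image(events):
    """Rank a product's images by user interaction metric; return (best image id, metrics)."""
    by_image = defaultdict(list)
    for e in events:
        by_image[e.image_id].append(e)
    metrics = {img: interaction_metric(evts) for img, evts in by_image.items()}
    return max(metrics, key=metrics.get), metrics
```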

In some examples, the machine learning training component 220 may use a feedback loop to iteratively update the selected unified image over time. For example, when network utilization conditions are favorable, the search results page may include one or more images from the list in addition to the previously identified unified image to enable the tracking service component 210 to update one or more user interaction parameters. The machine learning training component 220 may use the one or more updated user interaction parameters to generate an updated user interaction metric for each image, and may use the updated user interaction metric to determine whether to maintain or change the unified image. Thus, a unified image for each product can be determined based on user interaction metrics or a computer vision machine learning system, or both.
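One way to realize the keep-or-change decision in this feedback loop is sketched below; the hysteresis margin is an assumption added to avoid switching between images with nearly equal scores, not a detail from the disclosure.

```python
def update_unified_image(current_image_id, updated_metrics, margin=0.05):
    """Keep the current unified image unless another image beats it by a margin."""
    best_image, best_score = max(updated_metrics.items(), key=lambda kv: kv[1])
    current_score = updated_metrics.get(current_image_id, float("-inf"))
    # Switch only when a different image clearly outperforms the current one.
    if best_image != current_image_id and best_score > current_score * (1 + margin):
        return best_image
    return current_image_id
```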

Once the unified image is identified for the product, the machine learning training component 220 may forward the unified image and the identification of its product to the data-to-cache component 225 using a workflow management platform (e.g., Apache Airflow). The data-to-cache component 225 may be an example of a caching layer, such as a memory cache (e.g., memcache) or a non-SQL (NoSQL) database. The NoSQL database may be an example of a Couchbase database. The data-to-cache component 225 can provide the identification of the unified image and its product for storage in the cache 230.
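A dictionary-backed stand-in for that caching layer is sketched below; a real deployment would swap the dict for a memcached or Couchbase client, and the key/value layout shown is only an assumption for illustration.

```python
import json

class UnifiedImageCache:
    """Stand-in for the cache 230 (e.g., memcached or a NoSQL document store)."""

    def __init__(self):
        self._store = {}  # replace with a memcached/Couchbase client in production

    def put_unified_image(self, product_id, image_id):
        # Store the product-to-unified-image mapping under a product-scoped key.
        self._store[f"unified:{product_id}"] = json.dumps(
            {"product_id": product_id, "unified_image_id": image_id}
        )

    def get_unified_image(self, product_id):
        raw = self._store.get(f"unified:{product_id}")
        return json.loads(raw)["unified_image_id"] if raw else None
```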

When a buyer user device (e.g., user device 110) sends a search query using an online application for items listed for sale on an online marketplace, a representational state transfer (REST) component 235 may execute a REST service in response to the query. The REST component 235 may query the cache 230 using the search query to identify a particular product in the set of available products and one or more listings that match the search query. In some cases, cache 230 may return identifiers of the listings matching the search query and their seller-uploaded images, as well as identifiers of products and the corresponding unified images. In some cases, cache 230 may indicate that the seller did not upload images for a particular listing. The REST component 235 can retrieve the seller-uploaded images (if any) and the unified image from the distributed file system component 215 using the identifiers.
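The sketch below shows what such a REST lookup could look like as a small Flask handler. The endpoint name, query parameter, and in-memory indexes are hypothetical; the disclosure does not specify the web framework or the payload layout.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# Hypothetical in-memory stand-ins for the cache 230 and the search index.
unified_image_by_product = {"tablet-model-2-64gb": "img-unified-001"}
listings_by_query = {"tablet model 2 64gb": ("tablet-model-2-64gb", ["315-a", "315-b"])}

@app.route("/search")
def search():
    query = request.args.get("q", "").lower()
    product_id, listing_ids = listings_by_query.get(query, (None, []))
    if product_id is None:
        return jsonify({"listings": []})
    return jsonify({
        "product_id": product_id,
        "unified_image_id": unified_image_by_product.get(product_id),
        "listings": listing_ids,
    })

if __name__ == "__main__":
    app.run()
```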

The REST component 235 may also monitor or obtain information regarding the current network conditions of the computer network between itself and the buyer user device. Network conditions may include a current level of network congestion, a current cost of transmitting a particular amount of data over a computer network, a network connection type (e.g., low bandwidth, high speed, etc.), and so forth. The REST component 235 can use information about current network conditions to collaborate with the search term and product page component 240 in generating a search results page that includes one or more lists.

In some examples, the REST component 235 may determine that the network conditions indicate network congestion. When the network is congested, the search term and product page component 240 can generate a search results page that includes only a unified image for each product listing, without including any seller-uploaded images. However, the search results page may include links from which the buyer user device may individually download one or more of the seller-uploaded images. In other examples, when the network is not congested, the search term and product page component 240 can generate a search results page that includes a unified image for each returned listing in addition to the one or more seller-uploaded images. The search term and product page component 240 may then provide the search results page to the buyer user device for presentation to the prospective buyer (e.g., via a graphical user interface).
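That congestion-dependent page assembly could look roughly like the following; the field names and link scheme are assumptions for illustration only.

```python
def build_search_results(listing_ids, unified_image_id, congested, seller_images):
    """Assemble one entry per listing, choosing the image payload by network condition.

    `seller_images` maps a listing id to the list of seller-uploaded image ids.
    """
    page = []
    for listing_id in listing_ids:
        entry = {"listing_id": listing_id, "unified_image_id": unified_image_id}
        if congested:
            # Send only the unified image; offer the seller uploads as download links.
            entry["seller_image_links"] = [
                f"/images/{img}" for img in seller_images.get(listing_id, [])
            ]
        else:
            # Include the seller uploads so interaction tracking can continue.
            entry["seller_image_ids"] = seller_images.get(listing_id, [])
        page.append(entry)
    return page
```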

As the prospective buyer interacts with the search results page, the tracking service component 210 may cooperate with the search term and product page component 240 to monitor the prospective buyer's behavior to update one or more user interaction parameters stored in the distributed file system component 215 (e.g., user manipulation of an image, whether the user purchased the listed items after viewing the image, etc.).

For example, the machine learning training component 220 may implement a cluster computing framework (e.g., PySpark jobs) that may mine data in the distributed file system component 215 to determine whether the unified image results in a particular desired result (e.g., an increase in the likelihood of purchase or an increase in bandwidth usage efficiency). Accordingly, components of the application flow 200 can monitor buyer behavior over time to establish a feedback loop to train (e.g., continuously train) machine learning models for selecting a unified image of a product. The tracking service component 210 may continue to collect user interaction metrics, and the machine learning training component 220 may iteratively update the unified image based on the updated user interaction metrics. The machine learning training component 220 may use the updated one or more user interaction parameters to update the user interaction metrics for the one or more images, and may use the updated metrics to determine whether to maintain or change a unified image for the product.
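A minimal PySpark job of that sort might aggregate per-image outcomes as sketched below; the storage path and column names are assumptions, not details from the disclosure.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("unified-image-feedback").getOrCreate()

# Hypothetical layout: one row per image impression with outcome columns.
events = spark.read.parquet("/data/image_interactions")

image_outcomes = (
    events.groupBy("product_id", "image_id")
          .agg(F.avg(F.col("purchased").cast("double")).alias("purchase_rate"),
               F.avg("view_time_s").alias("avg_view_time"),
               F.avg("sale_price").alias("avg_sale_price"))
)

# Surface the best-performing image per product for the training component.
image_outcomes.orderBy("product_id", F.desc("purchase_rate")).show()
```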

Thus, displaying a unified image of a product may increase speed and reduce network bandwidth usage, as one image may be retrieved and downloaded for display with multiple listings of the product, rather than downloading multiple images of the product. In some examples, displaying multiple images may be an inefficient use of network speed or bandwidth when network speed or bandwidth, or both, are limited.

FIG. 3 illustrates an example of a search results web page 300 that supports a network and speed enhancement for distributing unified images via a computer network in accordance with aspects of the present disclosure. The web page 300 may be an example of a page displaying search results based on a search query entered by a buyer. The web page 300 may be displayed to the prospective buyer on a user device (e.g., user device 110), such as a computer, smartphone, or another client-facing user device.

The buyer can access an online application (e.g., a website or smartphone app) of an online marketplace (e.g., presented by the search term and product page component 240) and enter a search query. In an example, a buyer may enter a search for purchasing a tablet computer. In an example, a buyer may enter "Apple iPad Air 2 64 GB Wi-Fi" as a search query. The search query may result in the display of search results 305 including one or more listings 315 on the buyer user device.

Each listing may include an image 310 associated with the listing. The search results 305 may include one or more listings generated by sellers (e.g., users interacting with the vending flow component 205 using the user device 110) related to the search query entered by the buyer. An example listing 315 can include information about the item for sale (e.g., tablet model 2, 64 GB, grey shell), a current bid for the item (if the item is being sold by auction), a price for the item (e.g., if an immediate purchase function is used), an option to view other images of the item uploaded by the seller, and so forth. In the depicted example, search results 305 include listings 315-a, 315-b, 315-c, and 315-d, and each listing is associated with the same product (e.g., a "tablet model 2, 64 GB" product). In some cases, each item referenced in a listing 315 may be for the same product, but some of its features may differ from those in other listings of the product. For example, the housing colors of the tablet computers in some items may be different, but each tablet computer may have the same model (e.g., model 2) and the same storage capacity (e.g., 64 GB).

The same seller or a set of sellers may have generated listings 315-a, 315-b, 315-c, and 315-d. In generating listings 315-a, 315-b, 315-c, and 315-d, one or more sellers may upload a different set of images for each listing 315, even if each listing is for the same or a similar product (e.g., the Apple iPad Air 2 64 GB product). For example, listing 315-a may be for a tablet computer, specifically "Apple iPad Air 2 64 GB Wi-Fi, 9.7 inch, Space Gray, Grade A". The seller may upload an image for listing 315-a, which may be one or more high resolution stock photographs of the tablet computer. Listing 315-b may be for "Apple iPad Air 2 64 GB Wi-Fi + Cellular (Unlocked), 9.7 inch, Space Gray". The seller may upload an image for listing 315-b, which may be one or more images taken by the seller, together with accessories (e.g., the tablet computer's charger). Listing 315-c may be for "Apple iPad Air 2 64 GB Wi-Fi, 9.7 inch, Space Gray". The seller may upload an image for listing 315-c, which may be one or more low resolution or blurry stock photographs of the item. Listing 315-d may be for "Apple iPad Air 2 64 GB Wi-Fi, secondhand," and the seller may not upload any images when generating listing 315-d.

In the depicted example, an image 310 is displayed with each listing 315, and images 310-a, 310-b, 310-c, and 310-d are shown. For example, images 310-a, 310-b, 310-c, and 310-d may be thumbnail-sized versions of the images, and the buyer may select one to display a larger version of the same image. The machine learning techniques described herein may be used to select a unified (e.g., representative) image for the product, and some or all of the listings 315 of the product may display the same unified image. For example, images 310-a, 310-b, 310-c, and 310-d may each be the same unified image for the same product. In some cases, the seller may not upload any images when generating a listing 315, and the search results page 305 may include a unified image for that listing. For example, listing 315-d may not have any seller-uploaded images, while image 310-d corresponding to listing 315-d may be the unified image (rather than displaying an empty box). In some examples, the search results page 305 may display listings 315 for a plurality of products: a first subset of the listings (e.g., listings 315-a, 315-b) may each display a first unified image of a first product of the plurality of products, and a second subset of the listings (e.g., listings 315-c, 315-d) may display a second unified image of a second product of the plurality of products, wherein the first unified image is different from the second unified image.

The tracking service component 210 of the server system 125 as described herein may monitor the user's interactions with each of the listings 315 and images 310 presented in the search results page 305. In some cases, at least some of images 310-a, 310-b, 310-c, and 310-d may be different from each other when the network is not congested. The tracking service component 210 may generate updates for one or more user interaction parameters for updating the user interaction metrics for one or more of the images 310-a, 310-b, 310-c, and 310-d. For example, the user may take longer to view image 310-b of listing 315-b. The user may zoom in on image 310-a of listing 315-a. The machine learning training component 220 can analyze the user interaction metrics to determine whether to keep a previously identified unified image as the unified image of the product, or possibly change to a different unified image. The different unified image can be a seller-uploaded image or a different image selected by the machine learning training component 220.

In some examples, each user interaction parameter may be weighted differently and analyzed differently based on the machine learning analysis performed on the images 310 and the sales results of the listings 315. For example, in some cases, the user may zoom in on image 310-a and may eventually purchase the product in listing 315-a. In this case, the zoom parameter may contribute positively to the score of image 310-a when determining the unified image. In another case, the user may enlarge image 310-c of the product in listing 315-c, but the buyer may eventually purchase a different product, or may not purchase anything. For example, image 310-c may be a low quality image, which may be the reason the user zoomed in on the image. In this case, the zoom parameter may contribute negatively to the score when determining a unified image of the product, and image 310-c may be unlikely to be selected as the unified image. The unified image may also be presented to the prospective buyer in other configurations.

In another example, listings 315-a and 315-c may each be for the same first product, and listings 315-b and 315-d may be for the same second product, but the first and second products may be different. Thus, images 310-a and 310-c may be the same, and images 310-b and 310-d may be the same but different from images 310-a and 310-c. In this example, less data (e.g., data for two images instead of four images) may be transmitted over the network, which may result in lower bandwidth utilization to transmit the search results page because, on average, fewer than one image is downloaded per listing 315. In some examples, listing 315-a and listing 315-c may each be for the same first product, listing 315-b may be for a second product different from the first product, and listing 315-d may be for a third product different from the first product and the second product. In this example, images 310-a and 310-c may be the same, and images 310-b and 310-d may then be different from each other and from images 310-a and 310-c. In this case, three images may be downloaded instead of four, which may also result in lower bandwidth utilization.
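The savings can be made concrete with a small calculation; the 40 KB thumbnail size below is an illustrative assumption, not a figure from the disclosure.

```python
def page_image_bytes(listing_to_image, avg_image_kb=40):
    """Bytes of image data for a page when identical images are fetched only once.

    `listing_to_image` maps a listing id to the image id shown for that listing.
    """
    unique_images = set(listing_to_image.values())
    return len(unique_images) * avg_image_kb * 1024

# Four listings sharing two unified images: two downloads instead of four.
example = {"315-a": "img-1", "315-b": "img-2", "315-c": "img-1", "315-d": "img-2"}
print(page_image_bytes(example))  # 2 * 40 KB rather than 4 * 40 KB
```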

Fig. 4 illustrates an example of a web page 400 supporting a network and speed enhancement for distributing unified images via a computer network, according to aspects of the present disclosure. The web page 400 may be an example of a page displaying search results based on a search query entered by a buyer. The web page 400 may be displayed to the user on a buyer user device (e.g., user device 110), which may be a computer, a smart phone, or another client-oriented user device. Web page 400 may be an example of a web page displayed to a user based on a unified image selection as described herein. The web page 400 may be displayed in situations where network bandwidth is low or where data available to download images for display to a user is limited.

In some examples, the prospective buyer may enter a search query, which may be provided to the server system 125. The buyer may search for a tablet computer similar to the example provided in fig. 3. Server system 125 may map the search query to products associated with the search query. The server system 125 may determine a unified image of the product using the techniques described herein. In the depicted example, web page 400 may display search results 405 including lists 415-a, 415-b, 415-c, and 415-d. The search results 405 may include a single image 410 that is a unified image of the product, rather than including multiple instances of the same unified image.

For example, the server system 125 may determine a unified image 410 of a product (e.g., a tablet computer product). The server system 125 may also identify that the computer network is congested. The server system 125 may respond to the buyer search query using the unified image 410 for the identified listings 415-a, 415-b, 415-c, and 415-d and may not include any seller-uploaded images, to reduce the amount of data in the search results page that is communicated to the buyer user device via the computer network. In some cases, the data transfer that includes the search results page may include instructions to have the buyer user device display only a single instance of the unified image associated with the plurality of listings (as shown in fig. 4) or display multiple instances of the unified image, with one instance of the unified image displayed within each listing (as shown in fig. 3).
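Such an instruction could be carried as a simple flag in the response payload, as sketched below; the display_mode field and its values are hypothetical.

```python
def results_payload(listing_ids, unified_image_id, congested):
    """Build a response that transmits the unified image once plus a display hint.

    "single_instance" renders one shared image for all listings (as in fig. 4);
    "per_listing" repeats the same image within each listing (as in fig. 3).
    Either way, only one copy of the image data travels over the network.
    """
    return {
        "unified_image_id": unified_image_id,
        "display_mode": "single_instance" if congested else "per_listing",
        "listings": listing_ids,
    }
```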

Thus, the techniques described herein may allow a server system hosting an online application of an online marketplace to present a unified image of a product to enhance the likelihood of a desired result as well as to enhance network communications. These techniques may be applied when a low bandwidth network connection, network congestion, or the like is identified, and may be used to increase the speed of searching and image downloading. The server system may determine that providing a unified image of a product (e.g., the best representative image), so that one image is downloaded for multiple listings 415 rather than every image the seller selected when generating each product listing 415, may reduce bandwidth usage, increase speed, and improve the user experience.

For example, when network speed, network bandwidth, or both satisfy a congestion threshold indicating network congestion, server system 125 may determine to include unified image 410 in the search results page. If or when the network speed or network bandwidth, or both, no longer satisfy the congestion threshold, indicating that the network is currently not congested, server system 125 may select to display each product listing in a search results page along with the seller-uploaded images, for example, to allow the unified image to be updated using machine learning techniques. The system may also determine to display a unified image based on other criteria, for example, if the seller generates multiple product listings without an image, or if the images of the multiple product listings are ranked below the determined unified image (e.g., by at least a threshold).

Fig. 5 illustrates an example of a process flow 500 supporting network and speed enhancement for distributing unified images via a computer network in accordance with aspects of the present disclosure. The process flow 500 may include a server system 125-b, a buyer user device 505-a, and a seller user device 505-b. The server system 125-b may be an example of the server system 125 as described with reference to FIG. 1. The buyer user device 505-a and the seller user device 505-b may be examples of the user device 110 as described with reference to FIG. 1. The seller user device 505-b may be a device used by a seller to generate a listing of items for sale via an online marketplace, and the seller may have the option of uploading images of the items when creating the listing. The buyer user device 505-a may be a device used by the buyer to access the online marketplace (e.g., via a smart phone app or website), search listed items for sale, and complete a purchase transaction.

At 515, the server system 125-b can receive a set of images of a set of items that can be associated with a product. For example, at least one seller user device 505-b may interact with the server system 125-b to generate at least one listing of at least one item for sale via the online marketplace. For each listing, server system 125-b may allow the seller user device 505-b to upload one or more images of the items listed for sale in the listing. The set of images may be received over time, and the server system 125-b may map the listings to products.

In some examples, the server system 125-b may receive a list of at least one item in the set of items and one or more images in the set of images that may be associated with the list, wherein the first image is different from each of the one or more images that may be associated with the list. In some examples, the server system 125-b may receive a list of at least one item in the set of items, where the list does not have any images associated with the list in the set of images.

At 520, in some examples, server system 125-b may perform image classification, such as computer vision analysis, on the set of images based on extracting one or more image features from the set of images. In some examples, the server system 125-b may generate a confidence match score for each item in the set of items based on the image features. In some examples, the server system 125-b may then map the set of items to the product based on the confidence match scores.
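One plausible realization of step 520 and the item-to-product mapping is sketched below using cosine similarity between image feature vectors; the feature extractor itself (e.g., a CNN embedding) and the 0.8 threshold are assumptions for illustration.

```python
import numpy as np

def confidence_match_scores(item_features, product_features):
    """Cosine similarity between each item's image features and each product's features.

    Both inputs map ids to 1-D feature vectors produced by some upstream extractor.
    """
    scores = {}
    for item_id, feat in item_features.items():
        f = feat / np.linalg.norm(feat)
        scores[item_id] = {
            pid: float(f @ (pf / np.linalg.norm(pf)))
            for pid, pf in product_features.items()
        }
    return scores

def map_items_to_products(scores, threshold=0.8):
    """Assign each item to its best-matching product if the score clears the threshold."""
    mapping = {}
    for item_id, per_product in scores.items():
        best_product, best_score = max(per_product.items(), key=lambda kv: kv[1])
        if best_score >= threshold:
            mapping[item_id] = best_product
    return mapping
```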

At 525, the server system 125-b may perform image ranking that ranks the set of images based on the user interaction metric for each image of the set of images to identify a first image in the set of images of the product. For example, server system 125-b may monitor user interactions with the set of images via one or more search results pages to generate a user interaction metric for each image in the set of images, wherein ranking the set of images may be based on applying a machine learning model to the user interaction metrics. Monitoring user interaction with the image collection may include monitoring, via one or more search results pages, to generate user interaction metrics: a user viewing time of one or more images in the set of images, a user image zoom indicator of one or more images in the set of images, an item sale price for each item in at least one sold subset of the set of items, or any combination thereof.

In some examples, server system 125-b may monitor user interaction with the first image via one or more search results pages to generate updated user interaction metrics for the first image. The server system 125-b may then rank the set of images based on the updated user interaction metrics to identify a second image in the set of images of the product.

At 530, the server system 125-b may receive a search query from the buyer user device 505-a that may be mapped to a product. Server system 125-b may map the search query to a product, where the text entered into the search query best matches the product. The server system 125-b may receive a second search query from the buyer user device 505-a or another user device that may be mapped to a product.

At 535, the server system 125-b may transmit a search results page including at least one item of the set of items and the first image to the buyer user device 505-a based on the user interaction metric for the first image. In some examples, the server system 125-b may send a search results page to the buyer user device 505-a that includes a unified image of a list associated with a first item in the set of items and a unified image of a list associated with a second item in the set of items. The server system 125-b may send a search results page including the unified image to the buyer user device 505-a based on the network bandwidth measurement satisfying a bandwidth threshold (e.g., indicating network congestion). The server system 125-b may then send a second search results page, which may be mapped to a product, to the buyer user device 505-a or a second buyer user device.

At 540, the server system 125-b may monitor the buyer's interaction with the search results page as described herein. Server system 125-b may update one or more user interaction values based on user interactions with the search results page and may apply machine learning to the one or more updated user interaction values to generate updated user interaction metrics for one or more of the unified images presented in the search results page or other vendor upload images.

At 545, the server system 125-b may perform image ranking that ranks the set of images based on the updated user interaction metrics to identify a second unified image (e.g., a second image) in the set of images of the product. In some cases, the server system 125-b may maintain the same unified image, or may change to a different unified image for the product based on the image ranking.

Fig. 6 illustrates a block diagram 600 of a device 605 supporting network and speed enhancement for distributing unified images via a computer network in accordance with aspects of the present disclosure. The device 605 may include an input module 610, an image machine learning analysis component 615, and an output module 640. The device 605 may also include a processor. Each of these components may communicate with each other (e.g., via one or more buses). In some cases, device 605 may be an example of server system 125, and may include, for example, a user terminal, a database server, or a system containing multiple computing devices.

Input module 610 may manage input signals for device 605. For example, the input module 610 may recognize an input signal based on interaction with a modem, keyboard, mouse, touch screen, or similar device. These input signals may be associated with user input or processing on other components or devices. In some cases, the input module 610 may utilize an operating system, such as a known operating system, to manage the input signals. Input module 610 may send aspects of these input signals to other components of device 605 for processing. For example, the input module 610 can send input signals to the image machine learning analysis component 615 to support network and speed enhancements for distributing unified images via a computer network. In some cases, the input module 610 may be a component of an input/output (I/O) controller 815 as described with reference to fig. 8.

The image machine learning analysis component 615 can include a sell flow component 620, a machine learning training component 625, a representational state transfer component 630, and a search terms and products page component 635. The image machine learning analysis component 615 can be an example of an aspect of the image machine learning analysis component 705 or 810 (described with reference to fig. 7 and 8).

The image machine learning analysis component 615 and/or at least some of its various subcomponents may be implemented in hardware, software executed by a processor, firmware, or any combination thereof. If implemented in software executed by a processor, the functions of the image machine learning analysis component 615 and/or at least some of its various subcomponents may be performed by a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described in this disclosure. The image machine learning analysis component 615 and/or at least some of its various subcomponents can be physically located at various locations, including being distributed such that portions of functionality are implemented by one or more physical devices at different physical locations. In some examples, the image machine learning analysis component 615 and/or at least some of its various subcomponents may be separate and distinct components, according to aspects of the present disclosure. In other examples, the image machine learning analysis component 615 and/or at least some of its various subcomponents, in accordance with aspects of the present disclosure, may be combined with one or more other hardware components, including but not limited to an I/O component, a transceiver, a network server, another computing device, one or more other components described in the present disclosure, or a combination thereof.

The vending flow component 620 can receive a set of images of a set of items associated with a product. The machine learning training component 625 may perform image ranking that ranks the set of images based on the user interaction metric for each image of the set of images to identify a first image in the set of images of the product. The representational state transfer component 630 may receive a search query from a user device that maps to a product. The search terms and products page component 635 may send a search results page including at least one item of the set of items and the first image to the user device based on the user interaction metric of the first image.

The output module 640 may manage the output signals of the device 605. For example, the output module 640 can receive signals from other components of the device 605 (e.g., the image machine learning analysis component 615) and can send such signals to other components or devices. In some particular examples, output module 640 may send output signals for display in a user interface, for storage in a database or data store, for further processing at a server or cluster of servers, or for any other process on any number of devices or systems. In some cases, the output module 640 may be a component of the I/O controller 815 as described with reference to fig. 8.

Fig. 7 illustrates a block diagram 700 of an image machine learning analysis component 705 that supports network and speed enhancement for distributing unified images via a computer network in accordance with aspects of the present disclosure. The image machine learning analysis component 705 may be an example of an aspect of the image machine learning analysis component 615 or the image machine learning analysis component 810 described herein. The image machine learning analysis component 705 may include a sell flow component 710, a machine learning training component 715, a representational state transfer component 720, a search terms and products page component 725, a tracking service component 730, a distributed file system component 735, and a data to cache component 740. Each of these modules may communicate with each other directly or indirectly (e.g., via one or more buses).

The vending flow component 710 can receive a set of images of a set of items associated with a product. In some examples, the sell flow component 710 can receive a list of at least one item in the set of items and one or more images in the set of images associated with the list, wherein the first image is different from each of the one or more images associated with the list. In some examples, the sell flow component 710 can receive a list of at least one item in the set of items, wherein the list does not have any images associated with the list in the set of images.

The machine learning training component 715 may perform image ranking that ranks the set of images based on the user interaction metric for each image of the set of images to identify a first image in the set of images of the product. In some examples, machine learning training component 715 may perform image classification on the set of images based on extracting one or more image features from the set of images. In some examples, the machine learning training component 715 may generate a confidence match score for each item in the set of items based on the image features.

The representational state transfer component 720 may receive a search query from a user device that maps to a product. In some examples, the representational state transfer component 720 may receive a second search query from the user device or a second user device that maps to the product.

The search term and product page component 725 can send a search results page including at least one item in the set of items and the first image to the user device based on the user interaction metric of the first image. In some examples, the search term and product page component 725 may send a search results page to the user device that includes a first image of a listing associated with a first item in the set of items and a first image of a listing associated with a second item in the set of items. In some examples, the search term and product page component 725 may send a search results page including the first image to the user device based on the network bandwidth measurement satisfying the bandwidth threshold. In some examples, the search term and product page component 725 may send a second search results page to the user device or a second user device that includes a second image that is the same as or different from the first image.

The tracking service component 730 and the machine learning training component 715 may cooperate to monitor user interactions with the set of images via one or more search results pages and generate a user interaction metric for each image in the set of images, wherein ranking the set of images is based on applying a machine learning model to the user interaction metrics. In some examples, tracking service component 730 may monitor the following via one or more search results pages to generate user interaction metrics: a user viewing time of one or more images in the set of images, a user image zoom indicator of one or more images in the set of images, an item sale price for each item in at least one sold subset of the set of items, or any combination thereof. In some examples, tracking service component 730 may monitor user interaction with the first image via one or more search results pages to generate an updated user interaction metric for the first image.

The machine learning training component 715 may rank the set of images based on the updated user interaction metrics to identify a second image in the set of images of the product. The machine learning training component 715 may also map the set of items to the product based on the confidence match scores.

The distributed file system component 735 may store listings and vendor upload images. The data-to-cache component 740 can cache a unified image for each product.

Fig. 8 illustrates a diagram of a system 800 including a device 805 that supports a network and speed enhancement for distributing unified images via a computer network, according to aspects of the present disclosure. Device 805 may be an example of or include components of a server system or device 605 as described herein. Device 805 can include components for two-way data communications (including components for sending and receiving communications), including an image machine learning analysis component 810, an I/O controller 815, a database controller 820, a memory 825, a processor 830, a database 835, and an image machine learning analysis component 855. These components may be in electronic communication via one or more buses, such as bus 840.

The image machine learning analysis component 810 can be an example of the image machine learning analysis component 615 or 705 as described herein. For example, the image machine learning analysis component 810 can perform any of the methods or processes described above with reference to fig. 6 and 7. In some cases, the image machine learning analysis component 810 can be implemented in hardware, software executed by a processor, firmware, or any combination thereof.

I/O controller 815 may manage input signals 845 and output signals 850 for device 805. I/O controller 815 may also manage peripheral devices that are not integrated into device 805. In some cases, I/O controller 815 may represent a physical connection or port to a peripheral device. In some cases, I/O controller 815 may utilize an operating system, such as a known operating system. In other cases, I/O controller 815 may represent or interact with a modem, keyboard, mouse, touch screen, or similar device. In some cases, I/O controller 815 may be implemented as part of a processor. In some cases, a user may interact with device 805 via I/O controller 815 or via a hardware component controlled by I/O controller 815.

Database controller 820 may manage data storage and processing in database 835. In some cases, a user may interact with database controller 820. In other cases, database controller 820 may operate automatically without user interaction. Database 835 may be an example of a single database, a distributed database, multiple distributed databases, a data store, a data lake, or an emergency backup database.

The memory 825 may include Random Access Memory (RAM) and read-only memory (ROM). The memory 825 may store computer-readable, computer-executable software comprising instructions that, when executed, cause the processor to perform various functions described herein. In some cases, memory 825 may contain a basic input/output system (BIOS), or the like, which may control basic hardware or software operations, such as interaction with peripheral components or devices.

The processor 830 may include intelligent hardware devices (e.g., a general purpose processor, a DSP, a Central Processing Unit (CPU), a microcontroller, an ASIC, an FPGA, a programmable logic device, discrete gate or transistor logic components, discrete hardware components, or any combination thereof). In some cases, processor 830 may be configured to operate a memory array using a memory controller. In other cases, a memory controller may be integrated into processor 830. The processor 830 may be configured to execute computer-readable instructions stored in the memory 825 to perform various functions (e.g., functions or tasks to support networking and speed enhancement for distributing unified images via a computer network).

The image machine learning analysis component 855 may interact with each of the I/O controller 815, the image machine learning analysis component 810, the database controller 820, the database 835, the memory 825, and the processor 830 via the bus 840 to operate a computer-implemented process for determining which image of a set of images to present in a search results page of a product. The process can include receiving a set of images of a set of items associated with a product. These images may be received via I/O controller 815 based on input 845. The process may also include performing image ranking that ranks the set of images based at least in part on the user interaction metric for each image of the set of images to identify a first image in the set of images of the product. The process may also include receiving a search query mapped to a product from a user device via input 845 and I/O controller 815. The process may also include transmitting a search results page including at least one item of the set of items and the first image to the user device via the output 850 and the I/O controller 815 based at least in part on the user interaction metric of the first image.

Fig. 9 shows a flow diagram illustrating a method 900 of supporting a network and speed enhancement for distributing unified images via a computer network, in accordance with aspects of the present disclosure. The operations of method 900 may be implemented by a server system or components thereof as described herein. For example, the operations of method 900 may be performed by an image machine learning analysis component, as described with reference to fig. 6-8. In some examples, the server system may execute sets of instructions to control the functional elements of the server system to perform the functions described below. Additionally or alternatively, the server system may use dedicated hardware to perform the functional aspects described below.

At 905, the server system can receive a set of images of a set of items associated with a product. The operations of 905 may be performed according to the methods described herein. In some examples, aspects of the operations of 905 may be performed by a sell flow component, as described with reference to fig. 6-8.

At 910, the server system may perform image ranking that ranks the set of images based on the user interaction metric for each image of the set of images to identify a first image in the set of images of the product. The operations of 910 may be performed according to the methods described herein. In some examples, aspects of the operations of 910 may be performed by a machine learning training component, as described with reference to fig. 6-8.

At 915, the server system may receive a search query from the user device that maps to a product. The operations of 915 may be performed according to the methods described herein. In some examples, aspects of the operations of 915 may be performed by a search term and product page component, as described with reference to fig. 6-8.

At 920, the server system can transmit a search results page including at least one item of the set of items and the first image to the user device based on the user interaction metric of the first image. The operations of 920 may be performed according to the methods described herein. In some examples, aspects of the operations of 920 may be performed by the search term and product page component, as described with reference to fig. 6-8.

Fig. 10 shows a flow diagram illustrating a method 1000 of supporting a network and speed enhancement for distributing unified images via a computer network, in accordance with aspects of the present disclosure. The operations of method 1000 may be implemented by a server system or components thereof as described herein. For example, the operations of method 1000 may be performed by an image machine learning analysis component, as described with reference to fig. 6-8. In some examples, the server system may execute sets of instructions to control the functional elements of the server system to perform the functions described below. Additionally or alternatively, the server system may use dedicated hardware to perform the functional aspects described below.

At 1005, the server system can receive a set of images of a set of items associated with a product. The operations of 1005 may be performed in accordance with the methods described herein. In some examples, aspects of the operations of 1005 may be performed by a sell flow component, as described with reference to fig. 6-8.

At 1010, the server system can perform image ranking that ranks the set of images based on the user interaction metric for each image of the set of images to identify a first image in the set of images of the product. The operations of 1010 may be performed according to the methods described herein. In some examples, aspects of the operations of 1010 may be performed by a machine learning training component, as described with reference to fig. 6-8.

At 1015, the server system may monitor user interactions with the set of images via one or more search results pages to generate a user interaction metric for each image of the set of images, wherein ranking the set of images is based on applying a machine learning model to the user interaction metric. The operations of 1015 may be performed according to the methods described herein. In some examples, aspects of the operations of 1015 may be performed by a tracking service component and/or a machine learning training component, as described with reference to fig. 6-8.

At 1020, the server system may receive a search query from the user device that maps to a product. The operations of 1020 may be performed according to the methods described herein. In some examples, aspects of the operations of 1020 may be performed by a representational state transfer component, as described with reference to fig. 6-8.

At 1025, the server system can transmit a search results page including at least one item of the set of items and the first image to the user device based on the user interaction metric of the first image. The operations of 1025 may be performed according to the methods described herein. In some examples, aspects of the operations of 1025 may be performed by the search term and product page component, as described with reference to fig. 6-8.

Fig. 11 shows a flow diagram illustrating a method 1100 of supporting a network and speed enhancement for distributing unified images via a computer network, in accordance with aspects of the present disclosure. The operations of method 1100 may be implemented by a server system or components thereof as described herein. For example, the operations of method 1100 may be performed by an image machine learning analysis component, as described with reference to fig. 6-8. In some examples, the server system may execute sets of instructions to control the functional elements of the server system to perform the functions described below. Additionally or alternatively, the server system may use dedicated hardware to perform the functional aspects described below.

At 1105, the server system can receive a set of images of a set of items associated with a product. The operations of 1105 may be performed according to the methods described herein. In some examples, aspects of the operations of 1105 may be performed by a sell flow component, as described with reference to fig. 6-8.

At 1110, the server system may perform image classification on the set of images based on extracting one or more image features from the set of images. The operations of 1110 may be performed according to the methods described herein. In some examples, aspects of the operations of 1110 may be performed by a machine learning training component, as described with reference to fig. 6-8.

At 1115, the server system can generate a confidence match score for each item in the set of items based on the image features. The operations of 1115 may be performed in accordance with the methods described herein. In some examples, aspects of the operations of 1115 may be performed by a machine learning training component, as described with reference to fig. 6-8.

At 1120, the server system may map the set of items to the product based on the confidence match score. The operations of 1120 may be performed according to the methods described herein. In some examples, aspects of the operations of 1120 may be performed by a tracking service component, as described with reference to fig. 6-8.

At 1125, the server system may perform image ranking that ranks the set of images based on the user interaction metric for each image of the set of images to identify a first image in the set of images of the product. The operations of 1125 may be performed according to the methods described herein. In some examples, aspects of the operations of 1125 may be performed by a machine learning training component, as described with reference to fig. 6-8.

At 1130, the server system may receive a search query from the user device that maps to a product. The operations of 1130 may be performed according to the methods described herein. In some examples, aspects of the operations of 1130 may be performed by the search term and product page component, as described with reference to fig. 6-8.

At 1135, the server system may transmit a search results page including at least one item of the set of items and the first image to the user device based on the user interaction metric of the first image. The operations of 1135 may be performed according to the methods described herein. In some examples, aspects of the operations of 1135 may be performed by the search term and product page component, as described with reference to fig. 6-8.

It should be noted that the above method describes possible embodiments, the operations and steps may be rearranged or otherwise modified, and other embodiments are possible. Further, aspects from two or more methods may be combined.

The description set forth herein in connection with the drawings describes example configurations and is not intended to represent all examples that may be implemented or within the scope of the claims. The term "exemplary" as used herein means "serving as an example, instance, or illustration," rather than "preferred" or "superior to other examples." The detailed description includes specific details for the purpose of providing an understanding of the described technology. However, the techniques may be practiced without these specific details. In some instances, well-known structures and devices are shown in block diagram form in order to avoid obscuring the concepts of the described examples.

In the drawings, similar components or features may have the same reference numerals. Further, various components of the same type may be distinguished by following the reference label by a dash and a second label that distinguishes among the similar components. If only the first reference label is used in the specification, the description is applicable to any one of the similar components having the same first reference label irrespective of the second reference label.

The information and signals described herein may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.

The various illustrative blocks and modules described in connection with the disclosure herein may be implemented or performed with a general purpose processor, a DSP, an ASIC, an FPGA or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices (e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration).

The functions described herein may be implemented in hardware, software executed by a processor, firmware, or any combination thereof. If implemented in software executed by a processor, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Other examples and embodiments are within the scope of the disclosure and the following claims. For example, due to the nature of software, the functions described above may be implemented using software executed by a processor, hardware, firmware, hard wiring, or any combination of these. Features that perform a function may also be physically located at various positions, including being distributed such that portions of the function are performed at different physical locations. Also, as used herein, including in the claims, "or" as used in a list of items (e.g., a list of items beginning with a phrase such as "at least one of" or "one or more of") indicates an inclusive list such that, for example, a list of at least one of A, B, or C means A or B or C or AB or AC or BC or ABC (i.e., A and B and C). Also, as used herein, the phrase "based on" should not be construed as a reference to a closed condition set. For example, an exemplary step described as "based on condition A" may be based on both condition A and condition B without departing from the scope of the present disclosure. In other words, as used herein, the phrase "based on" should be interpreted in the same manner as the phrase "based, at least in part, on".

Computer-readable media includes both non-transitory computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A non-transitory storage medium may be any available medium that can be accessed by a general purpose or special purpose computer. By way of example, and not limitation, non-transitory computer-readable media can comprise RAM, ROM, electrically erasable programmable read-only memory (EEPROM), Compact Disc (CD) ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other non-transitory medium that can be used to carry or store desired program code means in the form of instructions or data structures and that can be accessed by a general-purpose or special-purpose computer, or a general-purpose or special-purpose processor. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, Digital Subscriber Line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, includes CD, laser disc, optical disc, Digital Versatile Disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above are also included within the scope of computer-readable media.

The description herein is provided to enable any person skilled in the art to make or use the present disclosure. Various modifications to the disclosure will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the scope of the disclosure. Thus, the disclosure is not intended to be limited to the examples and designs described herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
