Have you ever browsed a slow site with annoying, unnecessarily huge images that made the browser wait for the images to download before it could lay out the page?
Or perhaps the page did load, you started to read, and suddenly the whole page jumped because a big image claimed its space, and you had to find where you were reading again. This happens when the webpage author does not supply WIDTH and HEIGHT attributes on the IMG tag in the HTML code: the browser cannot learn the dimensions of an image before the image itself is downloaded.
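The author-side fix is a one-liner. Assuming a hypothetical 1024x768 photo, it looks like this:

```html
<img src="photo.jpg" width="1024" height="768" alt="A holiday photo">
```

With the dimensions in the markup, the browser can reserve the right amount of space immediately and the text never jumps.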
We can either get authors to include width and height information for every image in their webpages, or we can improve the technology: the browsers and web servers. How? Well, this is my suggestion:
When requesting something over HTTP, the browser supplies a list of content types it understands. These names are called MIME types. I propose a new MIME type, "application/image-metadata", which the browser would list as its most preferred response. A server that supports this MIME type would, at minimum, return metadata for the requested image instead of the image itself. Preferably it would return metadata for all images in the same directory, or ideally metadata for all images used in a webpage.
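As a sketch, the exchange might look like this on the wire. The metadata body format here is purely hypothetical; the essential part is the Accept header listing the new type first:

```http
GET /images/photo.jpg HTTP/1.1
Host: example.com
Accept: application/image-metadata, image/jpeg;q=0.5, image/*;q=0.1

HTTP/1.1 200 OK
Content-Type: application/image-metadata

photo.jpg 1024 768
```

A single small response like this tells the browser everything it needs to lay out the page, long before the image bytes arrive.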
This is fully backwards-compatible: web servers that do not understand the content type would simply return the image instead, and browsers that do not support it would never ask for that content type in the first place.
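The fallback logic on the browser side can be sketched in a few lines. This is my own illustration, not part of any real browser; the MIME type name is the one proposed above:

```python
METADATA_TYPE = "application/image-metadata"  # the proposed (hypothetical) MIME type

def build_accept_header(supports_metadata: bool) -> str:
    """A supporting browser lists the new type first, i.e. most preferred.
    A non-supporting browser never mentions it at all."""
    if supports_metadata:
        return f"{METADATA_TYPE}, image/*;q=0.5"
    return "image/*"

def classify_response(content_type: str) -> str:
    """Decide what the server sent back.
    An old server ignores the unknown type and just returns the image,
    so the browser degrades gracefully without any special casing."""
    if content_type == METADATA_TYPE:
        return "metadata"
    return "image"
```

Whatever the server does, the browser ends up with something usable: either the cheap metadata it asked for, or the full image it would have received anyway.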
The gains would probably be very noticeable over slow links, and perhaps over high-latency links as well. Do you think it would be worth it?