Enabling 3D mode allows for designing pictures with multiple layers, where some pixels protrude further out than others
This may not be a good fit for all images, but can work quite well in many cases
Supported image formats are dependent on your browser's compatibility
Due to the nature of the Lego Art sets, images with transparency aren't fully supported
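If your source image has transparency, one possible workaround is to flatten it onto a solid background before uploading, for example with a canvas. This is only a sketch of that pre-processing step, done outside the tool; the background color is up to you:

```typescript
// Sketch of flattening a transparent PNG onto a solid background before
// uploading. This is a pre-processing workaround, not part of the tool itself.
function flattenOntoBackground(img: HTMLImageElement, background = "#ffffff"): HTMLCanvasElement {
  const canvas = document.createElement("canvas");
  canvas.width = img.naturalWidth;
  canvas.height = img.naturalHeight;
  const ctx = canvas.getContext("2d")!;
  ctx.fillStyle = background;        // paint the background first
  ctx.fillRect(0, 0, canvas.width, canvas.height);
  ctx.drawImage(img, 0, 0);          // then composite the image on top
  return canvas;                     // canvas.toDataURL() gives a flat PNG
}
```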
Be careful when using high resolutions - this can cause performance issues on less powerful machines, especially during pdf generation and for 3D previews
Computing the depth map can be expensive, so be prepared to wait a bit, especially if you have a less powerful device.
How does this work? The depth map is computed using a DNN (deep neural network). For the reasons described in the 'about' section, everything runs entirely within the browser, using a modified version of ONNX.js. The model used is MiDaS - more specifically, the small ONNX version, which can be found here.
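Roughly speaking, the inference step looks like the sketch below, written against the stock onnxjs API (the tool itself uses a modified build, and the model path, 256x256 input size, and preprocessing here are assumptions rather than the actual values):

```typescript
// Minimal sketch of in-browser depth inference with the stock onnxjs API.
// The real tool uses a modified ONNX.js build, so details will differ.
// The model path and the 1x3x256x256 NCHW input shape are assumptions.
import { InferenceSession, Tensor } from "onnxjs";

async function estimateDepth(rgb: Float32Array): Promise<Float32Array> {
  // rgb is assumed to be already resized and normalized the way MiDaS expects.
  const session = new InferenceSession({ backendHint: "webgl" });
  await session.loadModel("./model-small.onnx"); // hypothetical path

  const input = new Tensor(rgb, "float32", [1, 3, 256, 256]);
  const outputs = await session.run([input]);

  // MiDaS outputs relative (inverse) depth; take the first output tensor.
  const depth = outputs.values().next().value as Tensor;
  return depth.data as Float32Array;
}
```

The raw output is a relative depth estimate per pixel; 3D mode then has to reduce it to the small number of discrete heights that can actually be built with plates.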
● Click a pixel to override its color
● Click an overridden pixel to restore it
● Right click a pixel to use its color for overrides
● The currently hovered-over pixel is also highlighted in the step 4 color canvas
● Make sure to add your override color to 'Available Studs' below for it to show up in step 4
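Under the hood, an override like the ones above boils down to mapping the click position to a grid cell and remembering a replacement color for it. The sketch below only illustrates that idea - the names and data structures are hypothetical, not the tool's actual code:

```typescript
// Hypothetical sketch of mapping a canvas click to a pixel override.
type RGB = [number, number, number];

const overrides = new Map<string, RGB>(); // "x,y" -> override color
let overrideColor: RGB = [255, 255, 255]; // set via right click

function handleClick(ev: MouseEvent, canvas: HTMLCanvasElement, gridW: number, gridH: number) {
  const rect = canvas.getBoundingClientRect();
  const x = Math.floor(((ev.clientX - rect.left) / rect.width) * gridW);
  const y = Math.floor(((ev.clientY - rect.top) / rect.height) * gridH);
  const key = `${x},${y}`;

  // Clicking an overridden pixel restores it; otherwise apply the override color.
  if (overrides.has(key)) {
    overrides.delete(key);
  } else {
    overrides.set(key, overrideColor);
  }
}

// Right click picks up a pixel's existing color to use for future overrides.
function handleRightClick(color: RGB) {
  overrideColor = color;
}
```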
This setting accounts for the fact that pixels represented by round pieces appear slightly darker due to the black background. Disable it if you're using square pieces for pixels.
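One plausible way a correction like this can work (the direction and the exact factor below are assumptions, not the tool's actual values) is to darken each candidate stud color slightly before matching, so the comparison reflects how a round piece reads against its black background:

```typescript
// Sketch only: darken candidate stud colors before matching so the comparison
// reflects how round pieces read against the black background.
// The 0.9 factor is an arbitrary illustrative value, not the tool's constant.
const ROUND_PIECE_DARKENING = 0.9;

function perceivedColor([r, g, b]: [number, number, number]): [number, number, number] {
  return [r * ROUND_PIECE_DARKENING, g * ROUND_PIECE_DARKENING, b * ROUND_PIECE_DARKENING];
}
```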
● This section specifies how many studs of each color you have available to create the image
● By default, this is set to all available Bricklink stud colors
● Step 4 of the algorithm cannot run unless you select enough studs to fill the picture ('Missing Studs' must be 0 - a sketch of this check follows the table below)
● If you're working with an existing set, clear the available studs and use the 'mix in' option to add the pieces from your set.
Required Studs: a table listing each color and the number available
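The 'Missing Studs' requirement mentioned above is conceptually just a count. A minimal sketch, assuming it simply compares the number of pixels in the mosaic against the total number of studs marked as available (the tool's exact bookkeeping may differ):

```typescript
// Sketch of the "Missing Studs" check: the mosaic needs width * height studs,
// and step 4 can only run once the selected studs cover that total.
// Keying the available counts by Bricklink color name is an assumption.
function missingStuds(
  width: number,
  height: number,
  available: Map<string, number> // color name -> studs you own
): number {
  let total = 0;
  for (const count of available.values()) total += count;
  return Math.max(0, width * height - total);
}
// Step 4 is unlocked when missingStuds(w, h, available) === 0.
```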
● Some sets use 1x1 tiles, while others use 1x1 studs. Make sure you double check before continuing if you want consistent pixels in your result.
● Depending on the input set of studs, many images don't easily translate to the available color palette. Try to use an image with similar colors to those of the set of studs you have available.
● In some cases, adjusting the colors in step 2 can help greatly, but this doesn't work for many source images. For more robust color adjustments, I recommend making the changes in another tool before remixing the image.
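To see why the palette matters so much, consider the simplest form of color matching: every image pixel gets mapped to the nearest available stud color, for example by squared distance in RGB (the tool's actual metric and color space may differ). If none of the stud colors are close to the image's colors, every match is a poor one:

```typescript
// Sketch of nearest-color matching in RGB space. This is only to illustrate
// why images whose colors sit far from the available studs translate poorly.
type RGB = [number, number, number];

function closestStudColor(pixel: RGB, studColors: RGB[]): RGB {
  let best = studColors[0];
  let bestDist = Infinity;
  for (const stud of studColors) {
    const d =
      (pixel[0] - stud[0]) ** 2 +
      (pixel[1] - stud[1]) ** 2 +
      (pixel[2] - stud[2]) ** 2;
    if (d < bestDist) {
      bestDist = d;
      best = stud;
    }
  }
  return best;
}
```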
● Click a pixel to increase its height
● Right click a pixel to decrease its height
● The currently hovered-over pixel is also highlighted in the step 4 color canvas
● If there is a significant loss of color quality between steps 3 and 4, then the set you're using is running out of studs. If you're willing to sacrifice resolution for color accuracy, then changing the resolution in step 1 can help by reducing the number of studs required.
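To put numbers on that: a 64×64 mosaic needs 64 × 64 = 4096 studs, while 48×48 needs only 2304, so even a modest drop in resolution frees up a large number of studs for better color matches. (These specific resolutions are just an example, not the tool's presets.)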
Changing this is useful if you have an image with a large completely uniform background but not enough pixels to fill that background
Longer instructions may be split into multiple files
Color names are Bricklink colors
Depending on your hardware and the resolution you've chosen, the pdf can take quite a few seconds to generate. Be prepared to wait if you're generating instructions for larger resolutions, especially for high quality pdfs. Larger resolutions may also cause some slowness on the page or may not work at all on less powerful devices, so I recommend starting at the default and then going up.
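Part of why this is slow is that every stud on every page is drawn individually in the browser. A minimal sketch of what that looks like with a client-side PDF library such as jsPDF (whether the tool actually uses jsPDF, and how pages are laid out, are assumptions here):

```typescript
// Minimal sketch of client-side PDF generation with jsPDF.
// The layout and the use of jsPDF itself are illustrative assumptions.
import { jsPDF } from "jspdf";

type RGB = [number, number, number];

function renderPage(grid: RGB[][], cell = 4): jsPDF {
  const doc = new jsPDF({ unit: "mm", format: "a4" });
  grid.forEach((row, y) =>
    row.forEach(([r, g, b], x) => {
      doc.setFillColor(r, g, b);
      // One filled circle per stud; thousands of these per page add up.
      doc.circle(10 + x * cell + cell / 2, 10 + y * cell + cell / 2, cell / 2, "F");
    })
  );
  return doc; // doc.save("instructions.pdf") to download
}
```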
● This is a (very) rough preview of what the 3D effect might look like
● Hover your mouse over the image to vary the perspective
● Make sure your depth map is not blank
● This is unlikely to work well on less powerful devices, since the preview is generated dynamically
● Keep in mind that the effect varies from browser to browser, can be subtle, and may not be 100% representative of what the physical art piece would look like
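The idea behind the preview is essentially a parallax fake: pixels with more depth get shifted further as the mouse moves. The sketch below only illustrates that idea - it is not the tool's actual renderer, and the cell size and shift strength are arbitrary:

```typescript
// Very rough sketch of a hover-driven parallax preview: shift each pixel
// horizontally in proportion to its depth and the mouse position.
type RGB = [number, number, number];

function drawParallax(
  ctx: CanvasRenderingContext2D,
  colors: RGB[][],   // step 4 colors
  depth: number[][], // 0 (back) .. 1 (front)
  mouseX: number,    // -1 .. 1, relative to canvas center
  cell = 8,
  strength = 4       // max shift in pixels, arbitrary
) {
  ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);
  colors.forEach((row, y) =>
    row.forEach(([r, g, b], x) => {
      const shift = depth[y][x] * mouseX * strength;
      ctx.fillStyle = `rgb(${r}, ${g}, ${b})`;
      ctx.fillRect(x * cell + shift, y * cell, cell, cell);
    })
  );
}
```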
● This is the set of plates that may be used to generate depth instructions and part lists
● These pieces are used as padding so that the correct pixels protrude outwards
● Note that larger plates may be difficult to attach/detach from the base
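For a sense of how padding plates might be chosen: one simple (greedy) approach is to cover each run of raised pixels in a row with the longest plate that fits. The plate lengths and the strategy below are assumptions, not necessarily what the instruction generator actually does:

```typescript
// Sketch: tile a run of raised pixels greedily with the longest plate that fits.
// The available 1xN plate lengths are illustrative assumptions.
function tileRun(runLength: number, plateLengths = [4, 2, 1]): number[] {
  const plates: number[] = [];
  let remaining = runLength;
  while (remaining > 0) {
    const plate = plateLengths.find((len) => len <= remaining) ?? 1;
    plates.push(plate);
    remaining -= plate;
  }
  return plates;
}

// Example: a run of 7 raised pixels -> one 1x4, one 1x2, and one 1x1 plate.
// tileRun(7) === [4, 2, 1]
```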
Available Plates:
Longer instructions may be split into multiple files
Depending on your hardware and the resolution you've chosen, the pdf can take quite a few seconds to generate. Be prepared to wait if you're generating instructions for larger resolutions, especially for high quality pdfs. Larger resolutions may also cause some slowness on the page or may not work at all on less powerful devices, so I recommend starting at the default and then going up.
Supported image formats are dependent on your browser's compatibility
Due to the nature of the Lego Art sets, images with transparency aren't fully supported
Example use case - this is a full Hogwarts crest created by the algorithm with just 1 copy of set 31201.
These are some articles and videos featuring Lego Art Remix
Some are quite interesting even outside the context of this tool in particular, since they go into the history of Lego mosaics
Note that some were made when the tool was older
In 2020, The Lego Group released the Lego Art theme, which allows people to create a predetermined image using Lego studs. Lego Art Remix lets you upload your own image, and then uses computer vision to recreate it with the studs from a Lego Art set that you already have.
This project is not affiliated with The Lego Group
The computer vision techniques used are pretty inexpensive (with the exception of optional depth map generation), and the resolutions being dealt with are naturally quite low, so as of the time of writing, the algorithm runs quite quickly. This allows it to run on the client, and on the machines that I tested, it ran in near real time.
The most computationally expensive part of the process, apart from depth map generation, is generating the instructions, since even pdf generation is done client side.
Since it runs almost entirely within the browser (see the source code), no image data is sent to a server, which keeps your images private. This also makes it much easier for me to maintain and host. The only server code consists of simple increments that anonymously estimate usage, both to track whether the static deployment needs to be scaled up and to drive the counter in the about section.
Even the deep neural network to compute depth maps is being run entirely within the browser, in a web worker, using a modified version of ONNX.js. I've compiled a version of the library based on this pull request, with a small additional change I made to support the resize operation in v10. The model used is MiDaS - more specifically, the small ONNX version which can be found here. It's incredible that ONNX.js allows us to use amazing work such as this within a web browser.
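The web worker part is standard message passing - the heavy inference runs off the main thread so the page stays responsive. A minimal sketch of the pattern (the file names and message shapes here are made up for illustration):

```typescript
// Sketch of the worker pattern only; a real worker would run the ONNX.js
// session instead of the placeholder below. File names are hypothetical.

// ---- depth-worker.ts ----
self.onmessage = (ev: MessageEvent<Float32Array>) => {
  const depth = ev.data; // placeholder: the model would run here
  (self as unknown as Worker).postMessage(depth, [depth.buffer]);
};

// ---- main thread ----
const worker = new Worker("depth-worker.js");
worker.onmessage = (ev: MessageEvent<Float32Array>) => {
  console.log("depth map ready:", ev.data.length, "values");
};
worker.postMessage(new Float32Array(3 * 256 * 256));
```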
As of the time of writing, I don't have all of the sets, and I haven't had much time to test. As a result, there are probably a few bugs, so let me know if you find any.
Algorithm improvement ideas are always welcome. Improvements that keep the efficiency within a reasonable range would allow the algorithm to keep running on the client, which I really like.