The oldest known knitted item dates back to Egypt in the Middle Ages, by way of a pair of carefully handcrafted socks. Although handmade clothes have occupied our closets for centuries, a recent influx of high-tech knitting machines has changed how we now create our favorite pieces.
These systems, which have made everything from Prada sweaters to Nike shirts, are still far from seamless. Programming machines for designs can be a tedious and complicated ordeal: when you have to specify every single stitch, one mistake can throw off the entire garment.
In a new pair of papers, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have come up with a way to streamline the process: a new system and design tool for automating knitted garments.
In one paper, a team created a system called “InverseKnit,” which translates photos of knitted patterns into instructions that machines then use to make clothing. An approach like this could let casual users create designs without a memory bank of coding knowledge, and could even help address issues of efficiency and waste in manufacturing.
“As far as machines and knitting go, this type of system could change accessibility for people looking to be the designers of their own items,” says Alexandre Kaspar, CSAIL PhD student and lead author on a new paper about the system. “We want to let casual users get access to machines without needing programming expertise, so they can reap the benefits of customization by making use of machine learning for design and manufacturing.”
In another paper, researchers came up with a computer-aided design tool for customizing knitted items. The tool lets non-experts use templates for adjusting patterns and shapes, like adding a triangular pattern to a beanie or vertical stripes to a sock. You can imagine users making items customized to their own bodies, while also personalizing for preferred aesthetics.
InverseKnit
Automation has already reshaped the fashion industry as we know it, and it has the potential to shrink our manufacturing footprint as well.
To get InverseKnit up and running, the team first created a dataset of knitting instructions and the matching images of those patterns. They then trained a deep neural network on that data to infer 2-D knitting instructions from images.
This might look something like giving the system a photo of a glove and letting the model produce a set of instructions, which the machine then follows to output the design.
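To make that pipeline concrete, here is a minimal sketch, in PyTorch, of what an image-to-instructions model of this general kind could look like. It is not the authors’ actual InverseKnit architecture: the size of the instruction alphabet, the 20-by-20 stitch grid, and the network layers are all illustrative assumptions. What it shows is the core framing of the problem as per-stitch classification of a swatch photo.

```python
# Illustrative sketch only -- not the published InverseKnit architecture.
# Maps a photo of a knitted swatch to a 2-D grid of machine-knitting
# instruction codes, one code per stitch cell.
import torch
import torch.nn as nn

NUM_INSTRUCTIONS = 17    # assumed instruction alphabet size (knit, purl, transfer, ...)
GRID_H, GRID_W = 20, 20  # assumed stitch-grid resolution of the output program

class ImageToInstructions(nn.Module):
    def __init__(self):
        super().__init__()
        # Convolutional encoder: swatch photo -> feature map
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        # Resample features onto the stitch grid, then classify each cell
        self.to_grid = nn.AdaptiveAvgPool2d((GRID_H, GRID_W))
        self.classifier = nn.Conv2d(128, NUM_INSTRUCTIONS, kernel_size=1)

    def forward(self, photo):             # photo: (B, 3, H, W)
        features = self.encoder(photo)
        grid = self.to_grid(features)     # (B, 128, GRID_H, GRID_W)
        return self.classifier(grid)      # per-stitch instruction logits

# One training step on (photo, instruction-grid) pairs, mirroring the
# dataset idea described above; random tensors stand in for real data.
model = ImageToInstructions()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

photos = torch.randn(8, 3, 160, 160)                              # stand-in swatch photos
labels = torch.randint(0, NUM_INSTRUCTIONS, (8, GRID_H, GRID_W))  # stand-in instruction grids

optimizer.zero_grad()
logits = model(photos)
loss = loss_fn(logits, labels)  # CrossEntropyLoss handles (B, C, H, W) logits with (B, H, W) targets
loss.backward()
optimizer.step()
```

In the actual work, the training pairs come from real machine-knitted samples and their known instruction programs, rather than the random stand-in tensors used here.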
When testing InverseKnit, the team found that it produced accurate instructions 94 percent of the time.
“Current state-of-the-art computer vision techniques are data-hungry, and they need many examples to model the world effectively,” says Jim McCann, assistant professor in the Carnegie Mellon Robotics Institute. “With InverseKnit, the team collected an immense dataset of knit samples that, for the first time, enables modern computer vision techniques to be used to recognize and parse knitting patterns.”
While the system currently works with a small sample size, the team hopes to expand the sample pool to employ InverseKnit on a larger scale. So far, the team has used only one specific type of acrylic yarn, but they hope to test different materials to make the system more flexible.
A tool for knitting
While there have been plenty of developments in the field, such as Carnegie Mellon’s automated knitting processes for 3-D meshes, these methods can often be complex and ambiguous. The distortions inherent in 3-D shapes make it harder to understand the positions of the items, and this can be a burden on designers.
To address this design issue, Kaspar and his colleagues developed a tool called “CADKnit”, which uses 2-D images, CAD software, and photo editing techniques to let casual users customize templates for knitted designs.
The tool lets users design both patterns and shapes in the same interface. With other software systems, you’d likely lose some work on either end when customizing both.
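As a rough illustration of that single-interface idea (not CADKnit’s actual API; every name here is invented for the example), a parametric template might expose shape parameters and pattern parameters side by side, so that resizing a sock and restyling its stripes are independent edits to the same object:

```python
# Toy sketch of a parametric garment template -- illustrative only.
from dataclasses import dataclass

@dataclass
class SockTemplate:
    circumference: int = 48   # stitches per row (shape parameter)
    leg_rows: int = 60        # rows in the leg section (shape parameter)
    stripe_width: int = 0     # columns per stripe; 0 means plain (pattern parameter)

    def stitch_grid(self):
        """Return a row-major grid of stitch codes: 'k' = knit, 'p' = purl."""
        grid = []
        for row in range(self.leg_rows):
            cells = []
            for col in range(self.circumference):
                # Alternate purl columns form a simple vertical-stripe texture
                if self.stripe_width and (col // self.stripe_width) % 2 == 1:
                    cells.append("p")
                else:
                    cells.append("k")
            grid.append(cells)
        return grid

# Changing the shape and changing the pattern are separate edits to one template:
sock = SockTemplate(circumference=56, leg_rows=72, stripe_width=4)
grid = sock.stitch_grid()
print(len(grid), "rows x", len(grid[0]), "stitches, first row:", "".join(grid[0][:16]))
```

Because the shape and the surface pattern live in one template, adjusting one does not force the user to redo the other, which is the workflow benefit described above.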
“Whether it’s for the everyday user who wants to mimic a friend’s beanie hat, or a subset of the public who might benefit from using this tool in a manufacturing setting, we’re aiming to make the process more accessible for personal customization,” says Kaspar.
The team tested the usability of CADKnit by having non-expert users create patterns for their garments and adjust the size and shape. In post-test surveys, the users said they found it easy to manipulate and customize their socks or beanies, successfully fabricating multiple knitted samples. They noted that lace patterns were tricky to design correctly and would benefit from fast realistic simulation.
However, the system is only a first step toward full garment customization. The authors found that garments with complicated interfaces between different parts, such as sweaters, didn’t work well with the design tool. The trunk of a sweater and its sleeves can be connected in various ways, and the software didn’t yet have a way of describing the whole design space for that.
Furthermore, the current system can only use one yarn for a shape, but the team hopes to improve this by introducing a stack of yarn at each stitch. To enable work with more complex patterns and larger shapes, the researchers plan to use hierarchical data structures that don’t incorporate all stitches, just the necessary ones.
“The impact of 3-D knitting has the potential to be even bigger than that of 3-D printing. Right now, design tools are holding the technology back, which is why this research is so important to the future,” says McCann.
A paper on InverseKnit was written by Kaspar alongside MIT postdocs Tae-Hyun Oh and Petr Kellnhofer, PhD student Liane Makatura, MIT undergraduate Jacqueline Aslarus, and MIT professor Wojciech Matusik. It was presented at the International Conference on Machine Learning this past June in Long Beach, California.
A paper on the design tool was led by Kaspar alongside Makatura and Matusik.