While humans intuitively manipulate garments and other textile items swiftly and accurately, this remains a significant challenge for robots. A factor crucial to human performance is the ability to imagine, a priori, the intended outcome of a manipulation and thereby predict the resulting garment pose. This allows us to plan from highly obstructed states, adapt our plans as we gather more information, and react swiftly to unforeseen circumstances. Robots, on the other hand, struggle to establish such intuitions and to form tight links between plans and observations. This can be attributed in part to the high cost of obtaining densely labelled textile manipulation data of sufficient quality and quantity. Data collection is a long-standing issue in data-driven approaches to garment manipulation. Currently, the generation of high-quality, labelled garment manipulation data is mainly attempted through advanced data capture procedures that create simplified state estimations from real-world observations. In this work, we propose the opposite: generating realistic observations from given object states. To achieve this, we present GARField (Garment Attached Radiance Field), a differentiable rendering architecture that allows data generation from simulated states stored as triangle meshes. Code will be available at https://ddonatien.github.io/garfield-website/
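To make the underlying idea concrete, the following is a minimal sketch, not the GARField architecture itself, of differentiably rendering a synthetic observation from a simulated garment state stored as a triangle mesh, here using the off-the-shelf PyTorch3D renderer. All tensors, sizes, and camera parameters are illustrative placeholders.

```python
import torch
from pytorch3d.structures import Meshes
from pytorch3d.renderer import (
    TexturesVertex, FoVPerspectiveCameras, RasterizationSettings,
    MeshRasterizer, MeshRenderer, SoftPhongShader, PointLights,
    look_at_view_transform,
)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Hypothetical simulated state: vertex positions (V, 3) and faces (F, 3) coming
# from a cloth simulator; replaced here by a single dummy triangle.
verts = torch.tensor([[0.0, 0.0, 0.0],
                      [1.0, 0.0, 0.0],
                      [0.0, 1.0, 0.0]], device=device)
faces = torch.tensor([[0, 1, 2]], device=device)
textures = TexturesVertex(verts_features=torch.ones_like(verts)[None])  # plain white
mesh = Meshes(verts=[verts], faces=[faces], textures=textures)

# One synthetic viewpoint and standard rasterization settings.
R, T = look_at_view_transform(dist=2.5, elev=20.0, azim=30.0)
cameras = FoVPerspectiveCameras(device=device, R=R, T=T)
raster_settings = RasterizationSettings(image_size=256, blur_radius=0.0, faces_per_pixel=1)
lights = PointLights(device=device, location=[[0.0, 0.0, 3.0]])

renderer = MeshRenderer(
    rasterizer=MeshRasterizer(cameras=cameras, raster_settings=raster_settings),
    shader=SoftPhongShader(device=device, cameras=cameras, lights=lights),
)

# The rendered image is differentiable w.r.t. the mesh vertices, textures and camera,
# so gradients can flow from image-space losses back to the simulated state.
image = renderer(mesh)  # (1, 256, 256, 4) RGBA tensor
```

In this sketch the observation is produced from the state rather than estimated from it, which is the inversion of the usual data capture pipeline described above.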