To create visual content that meets users' needs, it is often necessary to control the pose, expression, shape, and layout of the generated objects. Existing approaches to controlling generative adversarial networks (GANs) rely on manually annotated training data or a prior 3D model, and they often lack flexibility, precision, and generality. This work studies a powerful but less explored way of controlling GANs: "dragging" any points of an image so that they precisely reach target points in a user-interactive manner, as shown in Figure 1. To achieve this, DragGAN combines two components: 1) a feature-based motion supervision that drives the handle points towards their target positions, and 2) a point tracking approach that leverages the discriminative GAN features to keep localizing the positions of the handle points. With DragGAN, anyone can deform an image with precise control over where pixels go.
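To make the interplay of the two components concrete, the following is a minimal, hedged sketch of such an iterative drag loop in PyTorch. It is an illustration under assumptions, not the authors' implementation: `ToyGenerator` is a hypothetical stand-in for a real GAN such as StyleGAN2, and the helper names `sample_feature`, `motion_supervision_loss`, and `track_points` are invented for this sketch. The loop alternates between (1) optimizing the latent code with a motion-supervision loss that pulls the feature a small step from each handle toward its target, and (2) re-localizing each handle by nearest-neighbour search of its initial feature in the current feature map.

```python
# Minimal sketch of a DragGAN-style drag loop (illustrative only).
# ToyGenerator stands in for a real GAN; the helpers simplify many
# details of the actual method.
import torch
import torch.nn.functional as F


class ToyGenerator(torch.nn.Module):
    """Stand-in generator mapping a latent code to an image and a feature map."""

    def __init__(self, latent_dim=64, feat_ch=32, size=64):
        super().__init__()
        self.fc = torch.nn.Linear(latent_dim, feat_ch * size * size)
        self.to_img = torch.nn.Conv2d(feat_ch, 3, 1)
        self.feat_ch, self.size = feat_ch, size

    def forward(self, w):
        feat = self.fc(w).view(1, self.feat_ch, self.size, self.size)
        return self.to_img(feat), feat


def sample_feature(feat, points):
    """Bilinearly sample feature vectors at (x, y) pixel coordinates."""
    h, w = feat.shape[-2:]
    grid = points.clone().float()
    grid[:, 0] = grid[:, 0] / (w - 1) * 2 - 1  # normalize x to [-1, 1]
    grid[:, 1] = grid[:, 1] / (h - 1) * 2 - 1  # normalize y to [-1, 1]
    out = F.grid_sample(feat, grid.view(1, -1, 1, 2), align_corners=True)
    return out[0, :, :, 0].t()  # (num_points, channels)


def motion_supervision_loss(feat, handles, targets, step=2.0):
    """Pull the feature a small step towards the target to match the (detached)
    feature at the current handle, which nudges the content forward."""
    direction = F.normalize(targets - handles, dim=-1)
    f_shifted = sample_feature(feat, handles + step * direction)
    f_handle = sample_feature(feat.detach(), handles)  # stop-gradient
    return F.l1_loss(f_shifted, f_handle)


def track_points(feat, handles, f_init, radius=3):
    """Re-localize each handle by nearest-neighbour search of its initial
    feature within a small patch around its current position."""
    h, w = feat.shape[-2:]
    new_handles = handles.clone()
    for i in range(handles.shape[0]):
        x, y = handles[i].round().long().tolist()
        xs = torch.arange(max(x - radius, 0), min(x + radius + 1, w))
        ys = torch.arange(max(y - radius, 0), min(y + radius + 1, h))
        gy, gx = torch.meshgrid(ys, xs, indexing="ij")
        cand = torch.stack([gx.reshape(-1), gy.reshape(-1)], -1).float()
        dist = (sample_feature(feat, cand) - f_init[i]).abs().sum(-1)
        new_handles[i] = cand[dist.argmin()]
    return new_handles


# Illustrative usage: one handle point dragged 20 pixels to the right.
gen = ToyGenerator()
w = torch.randn(1, 64, requires_grad=True)
opt = torch.optim.Adam([w], lr=1e-2)
handles = torch.tensor([[20.0, 30.0]])
targets = torch.tensor([[40.0, 30.0]])
with torch.no_grad():
    f_init = sample_feature(gen(w)[1], handles)

for _ in range(50):
    _, feat = gen(w)
    loss = motion_supervision_loss(feat, handles, targets)
    opt.zero_grad()
    loss.backward()
    opt.step()
    with torch.no_grad():
        handles = track_points(gen(w)[1], handles, f_init)
    if (handles - targets).norm() < 1.0:  # stop once the handle reaches the target
        break
```

Detaching the feature at the current handle while letting gradients flow through the shifted location is what moves the image content toward the target as the latent code is updated; the tracking step then keeps each handle attached to the same semantic content between optimization steps.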