Creating visual content that meets user requirements often demands flexible and precise control over the pose, shape, expression, and layout of the generated objects. Traditional methods gain controllability of generative adversarial networks (GANs) by relying on manually annotated training data or prior 3D models, which often fall short in flexibility, precision, and generality. In this work, we study a powerful yet much less explored way of controlling GANs: interactively "dragging" points of an image so that they precisely reach user-specified target locations, as illustrated in Fig. 1. Our approach, DragGAN, consists of two main components: (1) feature-based motion supervision that drives the handle points toward their target positions, and (2) a new point tracking approach that leverages the discriminative features of GANs to keep localizing the handle points. With DragGAN, users can deform an image with precise control over where pixels move, enabling a more intuitive, user-centered editing process that broadens creative possibilities and helps users reach their desired visual outcomes more effectively.
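To make the two components above concrete, the following is a minimal PyTorch-style sketch of (1) feature-based motion supervision and (2) nearest-neighbour point tracking on generator features. The function names, the (y, x) point convention, the feature-map interface, and the hyper-parameters (patch `radius`, `step`, `search_radius`) are illustrative assumptions for exposition, not the authors' released implementation.

```python
# Minimal sketch, assuming a generator that exposes an intermediate feature map
# `feat` of shape (1, C, H, W) through which gradients flow back into the latent code.
import torch
import torch.nn.functional as F


def sample_features(feat, points):
    """Bilinearly sample feature vectors at (y, x) pixel coordinates. Returns (N, C)."""
    _, _, H, W = feat.shape
    points = points.to(feat.device, feat.dtype)
    # grid_sample expects normalized (x, y) coordinates in [-1, 1].
    grid = torch.stack([points[:, 1] / (W - 1) * 2 - 1,
                        points[:, 0] / (H - 1) * 2 - 1], dim=-1).view(1, 1, -1, 2)
    out = F.grid_sample(feat, grid, align_corners=True)  # (1, C, 1, N)
    return out[0, :, 0, :].t()


def motion_supervision_loss(feat, handle, target, radius=3, step=2.0):
    """Shifted-patch loss that nudges a handle point one small step toward its target."""
    h = torch.tensor(handle, dtype=torch.float32)
    t = torch.tensor(target, dtype=torch.float32)
    d = (t - h) / (torch.norm(t - h) + 1e-8)  # unit direction handle -> target
    # Pixel offsets of a small square patch around the handle point.
    ys, xs = torch.meshgrid(torch.arange(-radius, radius + 1, dtype=torch.float32),
                            torch.arange(-radius, radius + 1, dtype=torch.float32),
                            indexing="ij")
    offsets = torch.stack([ys, xs], dim=-1).reshape(-1, 2)
    patch_src = sample_features(feat.detach(), h + offsets)       # fixed reference patch
    patch_dst = sample_features(feat, h + offsets + step * d)     # gradients pull features here
    return F.l1_loss(patch_dst, patch_src)


def track_handle(feat_init, feat_new, handle_init, handle, search_radius=5):
    """Relocate a handle point by nearest-neighbour search in feature space."""
    ref = sample_features(feat_init, torch.tensor([handle_init], dtype=torch.float32))
    best, best_dist = handle, float("inf")
    for dy in range(-search_radius, search_radius + 1):
        for dx in range(-search_radius, search_radius + 1):
            cand = (handle[0] + dy, handle[1] + dx)
            f = sample_features(feat_new, torch.tensor([cand], dtype=torch.float32))
            dist = torch.norm(f - ref).item()
            if dist < best_dist:
                best, best_dist = cand, dist
    return best
```

In an editing loop, one would repeatedly (a) back-propagate `motion_supervision_loss` into the latent code and update it, (b) regenerate the image and its feature map, and (c) call `track_handle` to refresh the handle position before the next step, stopping once the handle reaches the target. This loop structure is a sketch of the idea described above rather than a drop-in reproduction of DragGAN.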