Ghulam Mujtaba

Assistant Professor at Regis University

3333 Regis Blvd, Denver, CO, 80221, USA

About Dr. Ghulam Mujtaba

Dr. Ghulam Mujtaba serves as Assistant Professor at Regis University in Denver, Colorado, specializing in Computer Vision, Multimedia Communications, and Deep Learning. His research develops lightweight, client-driven AI frameworks that bridge cutting-edge research and practical applications, serving 1,000+ concurrent users across production systems.

Dr. Mujtaba earned his Ph.D. in Computer Engineering from Gachon University in South Korea under Prof. Eun-Seok Ryu. His research achievements include 20+ publications and a US patent in personalized video summarization, along with a successful collaboration on the Disney+ Big Bet series that contributed to multi-million-dollar entertainment productions.

Dr. Mujtaba develops innovative educational technology platforms, including RegisPortfolio.com, which can serve 500+ students concurrently, and has delivered keynote presentations at 5+ international conferences across multiple continents. He welcomes collaborations with researchers, industry professionals, and students to advance AI technology through joint research projects, industry partnerships, and innovative computer vision applications.



Selected Publications

  1. EdgeVidSum: Real-Time Personalized Video Summarization at the Edge
    CVPR '25

    Mujtaba, Ghulam and Ryu, Eun-Seok

    Computer Vision and Pattern Recognition (CVPR) Demo, 2025

    EdgeVidSum is a lightweight framework that generates personalized summaries of long-form videos directly on edge devices. Using innovative thumbnail-based techniques and efficient neural architectures, the approach enables real-time video summarization while safeguarding user privacy through local data processing. Our interactive demo highlights the system’s ability to create tailored summaries of long-form videos such as movies, sports events, and TV shows based on individual user preferences. All computation runs seamlessly on resource-constrained devices like the Jetson Nano.
  2. EdgeAIGuard: Agentic LLMs for Minor Protection in Digital Spaces
    IEEE IoT '25

    Mujtaba, Ghulam and Khowaja, Sunder Ali and Dev, Kapal

    IEEE Internet of Things Journal, 2025

    Social media has become integral to minors’ daily lives and is used for purposes such as making friends, exploring shared interests, and engaging in educational activities. However, increased screen time has also heightened challenges, including cyberbullying, online grooming, and exploitation by malicious actors. Traditional content moderation techniques have proven ineffective against exploiters’ evolving tactics. To address these growing challenges, we propose EdgeAIGuard, a content moderation approach designed to protect minors from online grooming and other forms of digital exploitation. The method comprises a multi-agent architecture deployed strategically at the network edge to enable rapid, low-latency detection and to prevent harmful content from targeting minors. Experimental results show that the proposed method is significantly more effective than existing approaches.
  3. FRC-GIF: Frame Ranking-based Personalized Artistic Media Generation Method for Resource Constrained Devices
    IEEE ToBD '23

    Mujtaba, Ghulam and Ali Khowaja, Sunder and Aslam Jarwar, Muhammad and Choi, Jaehyuk and Ryu, Eun-Seok

    IEEE Transactions on Big Data, 2023

    Generating video highlights in the form of animated graphics interchange format (GIF) images has significantly simplified video browsing. Animated GIFs have paved the way for applications on streaming platforms and in emerging technologies. Existing methods, however, incur high computational complexity and do not consider user personalization. This paper proposes a lightweight method to attract users and increase video views through personalized artistic media, i.e., static thumbnails and animated GIFs. The proposed method analyzes lightweight thumbnail containers (LTC) using the computational resources of the client device to recognize personalized events in feature-length sports videos. The thumbnails are then ranked through a frame-rank-pooling method for selection. Subsequently, the method processes small video segments rather than the whole video to generate the artistic media, making our approach more computationally efficient than existing methods that use the entire video; the proposed method thus also complies with sustainable development goals. Furthermore, it retrieves and uses only thumbnail containers and video segments, which reduces both the required transmission bandwidth and the amount of locally stored data. Experiments reveal that the computational complexity of our method is 3.73 times lower than that of the state-of-the-art method.
  4. Client-driven lightweight method to generate artistic media for feature-length sports videos
    SIGMAP '22

    Mujtaba, Ghulam and Choi, Jaehyuk and Ryu, Eun-Seok

    SIGMAP: 19th International Conference on Signal Processing and Multimedia Applications, 2022

    This paper proposes a lightweight methodology to attract users and increase video views through personalized artistic media, i.e., static thumbnails and animated Graphics Interchange Format (GIF) images. The proposed method analyzes lightweight thumbnail containers (LTC) using the computational resources of the client device to recognize personalized events in feature-length sports videos. In addition, instead of processing the entire video, small video segments are used to generate the artistic media, making our approach more computationally efficient than existing methods that use the entire video. Further, the proposed method retrieves and uses only thumbnail containers and video segments, which reduces both the required transmission bandwidth and the amount of locally stored data used during artistic media generation. In experiments on the NVIDIA Jetson TX2, the computational complexity of our method was 3.78 times lower than that of the state-of-the-art method. To the best of our knowledge, this is the first technique that uses LTC to generate artistic media while providing lightweight, high-performance services on resource-constrained devices.
  5. Client-driven animated gif generation framework using an acoustic feature
    MTAP '21

    Mujtaba, Ghulam and Lee, Sangsoon and Kim, Jaehyoun and Ryu, Eun-Seok

    Multimedia Tools and Applications, 2021

    This paper proposes a novel, lightweight method to generate animated graphical interchange format images (GIFs) using the computational resources of a client device. The method analyzes an acoustic feature from the climax section of an audio file to estimate the timestamp corresponding to the maximum pitch. Further, it processes a small video segment to generate the GIF instead of processing the entire video. This makes the proposed method computationally efficient, unlike baseline approaches that use entire videos to create GIFs. The proposed method retrieves and uses the audio file and video segment so that communication and storage efficiencies are improved in the GIF generation process. Experiments on a set of 16 videos show that the proposed approach is 3.76 times more computationally efficient than a baseline method on an Nvidia Jetson TX2. Additionally, in a qualitative evaluation, the GIFs generated using the proposed method received higher overall ratings compared to those generated by the baseline method. To the best of our knowledge, this is the first technique that uses an acoustic feature in the GIF generation process.
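A recurring idea across these papers is client-driven segment selection: instead of processing an entire video, the client scores frames (by pitch, event probability, or thumbnail relevance) and fetches only a short window around the peak. The following is a minimal sketch of that selection step; the function name and parameters are illustrative, not taken from the papers' implementations.

```python
def select_segment(scores, fps=30.0, window_s=5.0):
    """Pick a short segment centered on the highest-scoring frame.

    scores:   per-frame relevance scores (pitch, event probability, ...)
    fps:      frame rate of the source video
    window_s: desired segment length in seconds
    Returns (start_frame, end_frame) spanning roughly window_s seconds,
    clamped to the video bounds.
    """
    if not scores:
        raise ValueError("scores must be non-empty")
    peak = max(range(len(scores)), key=scores.__getitem__)  # argmax over frames
    half = int(window_s * fps / 2)                          # half-window in frames
    start = max(0, peak - half)
    end = min(len(scores), peak + half)
    return start, end
```

Only the frames in [start, end) would then be transferred and decoded on the client, which is what yields the bandwidth and compute savings the papers report.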