Multimodal face animation model

An audio-visual multimodal model can improve the quality of face tracking (during speech) as well as its robustness (when the face gets occluded) over vision-only solutions. This is my reading note on audio-visual multimodal face tracking.

Read More

How to Install DSM 6.x on Windows VirtualBox

Please follow the steps below to install DSM 6.x in VirtualBox on your Windows machine:

  • Install VirtualBox (version 5.2.16 recommended; 6.1 didn’t work for me).
  • Create the physical disk for the virtual machine. Open PowerShell in administrator mode:
    • Use wmic diskdrive list brief to list all your hard drives
    • Take the disk offline (in diskpart: select disk 0, then offline disk)
    • Enable write access for the raw disk in the VM
  • Create the raw disk for the VM via “C:\Program Files\Oracle\VirtualBox\VBoxManage.exe” internalcommands createrawvmdk -filename 0.vmdk -rawdisk \\.\PhysicalDrive0
  • Download the loader and convert the img to vdi via “C:\Program Files\Oracle\VirtualBox\VBoxManage.exe” convertfromraw --format VDI .\synoboot.img .\synoboot.vdi
  • Download the DSM firmware from Synology. Note that newer versions, e.g., 6.2.1 and 6.2.2, didn’t work for me (the VM cannot be found in Synology Assistant later)
  • Open VirtualBox in administrator mode
  • Create the VM as follows
  • Start the VM and find it in Synology Assistant. Use F12 to select the boot drive; for boot options, both the bare-metal and VM options should work
  • Install DSM
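The raw-disk steps above can be sketched as a command sequence. This is a sketch from my notes, not a verified script: it assumes an administrator PowerShell, a default VirtualBox install path, and that PhysicalDrive0 is the target disk (substitute your own drive number).

```shell
# List physical drives to find the one DSM will use (PhysicalDrive0 assumed below)
wmic diskdrive list brief

# Take the disk offline so VirtualBox gets exclusive raw access
diskpart
#   inside diskpart:
#     select disk 0
#     offline disk
#     exit

# Create a raw-disk VMDK pointing at the physical drive
& "C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" internalcommands createrawvmdk `
    -filename 0.vmdk -rawdisk \\.\PhysicalDrive0

# Convert the downloaded loader image to VDI
& "C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" convertfromraw --format VDI `
    .\synoboot.img .\synoboot.vdi
```

These commands modify disk state, so double-check the drive number before running them.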
Read More


This is my reading note on Zero-Shot Text-to-Image Generation (aka DALL-E), its extension Hierarchical Text-Conditional Image Generation with CLIP Latents (aka DALL-E 2 or unCLIP), and StoryDALL-E: Adapting Pretrained Text-to-Image Transformers for Story Continuation. DALL-E is a transformer that generates an image given a caption, by autoregressively modeling the text and image tokens as a single stream of data. StoryDALL-E extends DALL-E by generating a sequence of images for a sequence of captions to complete a story.
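As a toy illustration of the "single stream" idea, the sketch below concatenates text tokens and image tokens into one sequence, offsetting the image ids so the two vocabularies don't collide. The function name is my own; the text vocabulary size of 16384 and the 8192-entry image codebook match the DALL-E paper, but the rest is illustrative.

```python
TEXT_VOCAB = 16384   # DALL-E's BPE text vocabulary size
IMAGE_VOCAB = 8192   # DALL-E's dVAE image codebook size

def to_single_stream(text_tokens, image_tokens):
    """Combine a caption and its image codes into one autoregressive stream."""
    shifted = [t + TEXT_VOCAB for t in image_tokens]  # image ids live after text ids
    return text_tokens + shifted

stream = to_single_stream([5, 42, 7], [0, 8191])
print(stream)  # [5, 42, 7, 16384, 24575]
```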

Read More




Pix2seq A Language Modeling Framework for Object Detection

Pix2seq: A Language Modeling Framework for Object Detection casts object detection as a language modeling task conditioned on the observed pixel inputs. Object descriptions (e.g., bounding boxes and class labels) are expressed as sequences of discrete tokens, and a neural network is trained to perceive the image and generate the desired sequence. The approach is based mainly on the intuition that if a neural network knows where and what the objects are, we just need to teach it how to read them out. Experimental results are shown in Table 1, which indicates Pix2seq achieves state-of-the-art results on COCO.
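The sequence construction can be sketched as follows: each object becomes coordinate tokens plus a class token, with box coordinates quantized into discrete bins. The bin count and the convention of placing class tokens after the coordinate bins follow the paper's spirit, but the exact numbers and function name here are my own assumptions.

```python
NUM_BINS = 1000  # number of coordinate quantization bins (illustrative)

def box_to_tokens(box, label, image_size):
    """Quantize a (ymin, xmin, ymax, xmax) pixel box into discrete tokens."""
    h, w = image_size
    tokens = []
    for coord, extent in zip(box, (h, w, h, w)):
        # map [0, extent] -> integer bin in [0, NUM_BINS - 1]
        tokens.append(min(NUM_BINS - 1, int(coord / extent * NUM_BINS)))
    tokens.append(NUM_BINS + label)  # class tokens follow the coordinate bins
    return tokens

print(box_to_tokens((0, 0, 320, 640), label=3, image_size=(640, 640)))
# [0, 0, 500, 999, 1003]
```

A detection "sentence" is then just these per-object token groups concatenated, which is what lets a standard autoregressive decoder emit detections.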

Read More

DreamBooth Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation

This is my reading note on DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation. Given as input just a few (3~5) images of a subject, DreamBooth fine-tunes a pretrained text-to-image model such that it learns to bind a unique identifier with that specific subject. Once the subject is embedded in the output domain of the model, the unique identifier can then be used to synthesize fully novel photorealistic images of the subject contextualized in different scenes. By leveraging the semantic prior embedded in the model with a new autogenous class-specific prior preservation loss, DreamBooth enables synthesizing the subject in diverse scenes, poses, views, and lighting conditions that do not appear in the reference images. (See Figure 1 for an example.)
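The shape of the prior preservation objective can be sketched as a reconstruction loss on the subject images plus a weighted reconstruction loss on images generated for the generic class. This is a toy scalar sketch; the function names, the plain MSE, and the default weight are my own assumptions, not the paper's code (the real losses are diffusion denoising losses).

```python
def mse(a, b):
    """Mean squared error between two equal-length sequences."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def dreambooth_loss(subject_pred, subject_target,
                    prior_pred, prior_target, prior_weight=1.0):
    # Term 1: reconstruct the subject under the "[identifier] [class]" prompt
    subject_term = mse(subject_pred, subject_target)
    # Term 2: keep the model's own outputs for the plain "[class]" prompt intact
    prior_term = mse(prior_pred, prior_target)
    return subject_term + prior_weight * prior_term

print(dreambooth_loss([1.0, 2.0], [1.0, 1.0], [0.0, 0.0], [2.0, 0.0]))  # 2.5
```

The second term is what discourages the model from collapsing the whole class (e.g., all dogs) onto the few subject images.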

Read More

HeadNeRF A Real-time NeRF-based Parametric Head Model

HeadNeRF: A Real-time NeRF-based Parametric Head Model provides a parametric head model which can generate photorealistic face images conditioned on identity, expression, head pose, and appearance (lighting). Compared with traditional mesh and texture, it provides higher fidelity, is inherently differentiable, and doesn’t require a 3D dataset; compared with a GAN, it provides rendering at different head poses with accurate 3D information. This is achieved with NeRF. In addition, it can render in real time (5ms) with a modern GPU.
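The conditioning can be sketched as follows: latent codes for identity, expression, and appearance are concatenated with the positionally encoded 3D sample before the NeRF MLP, so changing a code changes the rendered head. All dimensions below are illustrative assumptions, not HeadNeRF's actual sizes.

```python
def conditioned_input(pos_encoding, identity, expression, appearance):
    """Build the per-sample MLP input for a latent-conditioned NeRF (list concat)."""
    return pos_encoding + identity + expression + appearance

# Assumed sizes: 60-d position encoding, 100-d identity, 79-d expression, 27-d appearance
x = conditioned_input([0.1] * 60, [0.0] * 100, [0.0] * 79, [0.0] * 27)
print(len(x))  # 266
```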

Read More

CLIP Learning Transferable Visual Models From Natural Language Supervision

This is my reading note on Learning Transferable Visual Models From Natural Language Supervision. The proposed method is called Contrastive Language-Image Pre-training, or CLIP. State-of-the-art computer vision systems are trained to predict a fixed set of predetermined object categories. The paper demonstrates that the simple pre-training task of predicting which caption (free-form text instead of strict labeling) goes with which image is an efficient and scalable way to learn SOTA image representations from scratch on a dataset of 400 million (image, text) pairs collected from the internet. After pre-training, natural language is used to reference learned visual concepts (or describe new ones), enabling zero-shot transfer of the model to downstream tasks.
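The contrastive pre-training objective can be sketched as a symmetric cross-entropy over an N x N image-text similarity matrix: each image should pick its own caption (rows) and each caption its own image (columns). This is a pure-Python toy with made-up similarity values, not CLIP's actual implementation.

```python
import math

def softmax_xent(logits, target):
    """Cross-entropy of one row of logits against the target index."""
    m = max(logits)
    log_z = m + math.log(sum(math.exp(l - m) for l in logits))
    return log_z - logits[target]

def clip_loss(sim):
    """Symmetric contrastive loss; matched pairs sit on the diagonal of sim."""
    n = len(sim)
    img_to_txt = sum(softmax_xent(sim[i], i) for i in range(n)) / n
    cols = [[sim[i][j] for i in range(n)] for j in range(n)]
    txt_to_img = sum(softmax_xent(cols[j], j) for j in range(n)) / n
    return (img_to_txt + txt_to_img) / 2

# Strong diagonal (matched pairs) should give a near-zero loss
print(clip_loss([[10.0, 0.0], [0.0, 10.0]]))
```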

Read More

Rotation in 3D

This is my note on rotation in 3D space. There are many different ways of representing a rotation in 3D space, e.g., a 3x3 rotation matrix, Euler angles (pitch, yaw, and roll), the Rodrigues axis-angle representation, and quaternions. The relationships and conversions between those representations are described below. You could also use scipy.spatial.transform.Rotation to convert between them.
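As one of the conversions, the Rodrigues formula R = I + sin(t) K + (1 - cos(t)) K^2 turns an axis-angle pair into a rotation matrix, where K is the cross-product matrix of the (unit) axis. A minimal pure-Python sketch (scipy.spatial.transform.Rotation does this and more):

```python
import math

def axis_angle_to_matrix(axis, angle):
    """Rodrigues formula: unit axis + angle (radians) -> 3x3 rotation matrix."""
    x, y, z = axis  # axis is assumed to be a unit vector
    K = [[0.0, -z, y], [z, 0.0, -x], [-y, x, 0.0]]  # cross-product matrix
    s, c = math.sin(angle), math.cos(angle)
    K2 = [[sum(K[i][k] * K[k][j] for k in range(3)) for j in range(3)]
          for i in range(3)]
    return [[(1.0 if i == j else 0.0) + s * K[i][j] + (1 - c) * K2[i][j]
             for j in range(3)] for i in range(3)]

# Rotating by 90 degrees about z should map the x axis to the y axis,
# i.e., the first column of R is (0, 1, 0)
R = axis_angle_to_matrix((0.0, 0.0, 1.0), math.pi / 2)
print([round(R[i][0], 6) for i in range(3)])  # [0.0, 1.0, 0.0]
```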

Read More