
Direct domain adaptation through reciprocal linear transformations (2108.07600v1)

Published 17 Aug 2021 in cs.LG and cs.CV

Abstract: We propose a direct domain adaptation (DDA) approach that enriches the training of supervised neural networks on synthetic data with features from real-world data. The process applies a series of linear operations to the input features of the NN model, whether they come from the source or the target domain:

  1. A cross-correlation of the input data (i.e., images) with a randomly picked sample pixel (or pixels) of all images from that domain, or with the mean of all randomly picked sample pixels of all images.
  2. A convolution of the resulting data with the mean of the autocorrelated input images from the other domain.

In the training stage, as expected, the input images come from the source domain, and the mean of the autocorrelated images is evaluated from the target domain. In the inference/application stage, the input images come from the target domain, and the mean of the autocorrelated images is evaluated from the source domain. The proposed method only manipulates the data from the source and target domains and does not explicitly interfere with the training workflow or the network architecture. An application that trains a convolutional neural network on the MNIST dataset and tests it on the MNIST-M dataset achieves 70% accuracy on the test data. A principal component analysis (PCA), as well as t-SNE, shows that the input features from the source and target domains, after the proposed direct transformations, share similar properties along the principal components, in contrast to the original MNIST and MNIST-M input features.
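The two linear operations described above lend themselves to a short sketch. The following is a minimal, hypothetical NumPy/SciPy illustration, not the authors' implementation: the grayscale image format, the patch-based interpretation of "a randomly picked sample pixel (or pixels)", and the use of scipy.signal.fftconvolve for the correlations are assumptions made for clarity.

```python
# Minimal sketch of the DDA input transform, assuming 2-D grayscale images
# of equal shape (e.g. 28x28 MNIST digits) stored as NumPy arrays.
import numpy as np
from scipy.signal import fftconvolve


def mean_autocorrelation(images):
    """Mean of the autocorrelated images from one domain."""
    acs = [fftconvolve(img, img[::-1, ::-1], mode="same") for img in images]
    return np.mean(acs, axis=0)


def dda_transform(inputs, other_domain_images, patch=3, rng=None):
    """Apply the two linear operations to `inputs`, using the mean
    autocorrelation evaluated from the *other* domain.

    `patch` (a small pixel neighbourhood used as the correlation kernel)
    is an illustrative assumption, not a parameter from the paper."""
    rng = np.random.default_rng(rng)
    mean_ac = mean_autocorrelation(other_domain_images)
    out = []
    for img in inputs:
        # 1) cross-correlate the image with randomly picked sample pixels
        #    drawn from an image of the same domain
        ref = inputs[rng.integers(len(inputs))]
        i = rng.integers(0, ref.shape[0] - patch)
        j = rng.integers(0, ref.shape[1] - patch)
        kernel = ref[i:i + patch, j:j + patch]
        xcorr = fftconvolve(img, kernel[::-1, ::-1], mode="same")
        # 2) convolve the result with the mean autocorrelation of the other domain
        out.append(fftconvolve(xcorr, mean_ac, mode="same"))
    return np.asarray(out)
```

In this reading of the abstract, `inputs` would hold source-domain images and `other_domain_images` target-domain images at training time; at inference time the roles swap, which is the reciprocal aspect of the transformation.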

Authors (2)
  1. Tariq Alkhalifah (67 papers)
  2. Oleg Ovcharenko (5 papers)
Citations (3)
