Proceedings of the OAGM&ARW Joint Workshop 2016, DOI: 10.3217/978-3-85125-528-7-01, p. 15


One-Shot Learning of Scene Categories via Feature Trajectory Transfer

Roland Kwitt¹, Sebastian Hegenbart¹, Marc Niethammer²

¹University of Salzburg, Austria; ²University of North Carolina, Chapel Hill, NC, USA

Abstract

The appearance of (outdoor) scenes changes considerably with the strength of certain transient attributes, such as "rainy", "dark" or "sunny". Obviously, this also affects the representation of an image in feature space, e.g., as activations at a certain CNN layer, and consequently impacts scene recognition performance. In this work, we investigate the variability in these transient attributes as a rich source of information for studying how image representations change as a function of attribute strength. In particular, we leverage a recently introduced dataset with fine-grained annotations to estimate feature trajectories for a collection of transient attributes and then show how these trajectories can be transferred to new image representations. This enables us to synthesize new data along the transferred trajectories with respect to the dimensions of the space spanned by the transient attributes. Applicability of this concept is demonstrated on the problem of one-shot scene recognition. We show that data synthesized via feature trajectory transfer considerably boosts recognition performance, (1) with respect to baselines and (2) in combination with state-of-the-art approaches in one-shot learning.
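The core idea, fitting per-attribute feature trajectories on an annotated source set and replaying them around a single target sample, can be sketched as follows. This is a minimal illustration that assumes linear trajectories fit by least squares; the function names, the linearity assumption, and the random toy data are ours and not necessarily the paper's exact formulation.

```python
import numpy as np

def fit_trajectory(features, strengths):
    """Fit a linear feature trajectory f(s) ~ b + s * d for one transient
    attribute via least squares (an illustrative simplification).

    features  : (N, D) array of image features (e.g., CNN activations)
    strengths : (N,)   array of annotated attribute strengths in [0, 1]
    Returns (d, b): direction and intercept, each of shape (D,).
    """
    X = np.stack([np.ones_like(strengths), strengths], axis=1)  # (N, 2)
    coef, *_ = np.linalg.lstsq(X, features, rcond=None)         # (2, D)
    return coef[1], coef[0]

def transfer_and_synthesize(x, s0, direction, new_strengths):
    """Transfer a fitted trajectory direction to one feature vector x,
    observed at attribute strength s0, and synthesize feature vectors
    at the requested target strengths (one-shot data augmentation)."""
    return np.stack([x + (s - s0) * direction for s in new_strengths])

# Toy usage: synthesize "sunnier" and "darker" variants of one sample.
rng = np.random.default_rng(0)
feats = rng.normal(size=(50, 128))   # annotated source-domain features
s = rng.uniform(size=50)             # attribute strengths for the source set
d, _ = fit_trajectory(feats, s)
x_new = rng.normal(size=128)         # the single (one-shot) training sample
synthetic = transfer_and_synthesize(x_new, 0.5, d, [0.1, 0.9])
print(synthetic.shape)               # (2, 128): two synthesized variants
```

The synthesized vectors can then be added to the one-shot training set of a classifier; the benefit of this kind of augmentation over the unaugmented baseline is exactly what the paper evaluates.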

