Position

656 papers with code • 0 benchmarks • 0 datasets

Most implemented papers

Non-local Neural Networks

facebookresearch/video-nonlocal-net CVPR 2018

Both convolutional and recurrent operations are building blocks that process one local neighborhood at a time.
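
The non-local operation that the paper builds on computes each output position as a weighted sum over all positions, with weights given by a pairwise similarity. Below is a minimal NumPy sketch of the embedded-Gaussian variant on a flattened (positions × channels) feature map; the weight matrices are stand-ins for the paper's learned 1×1 convolutions, and the final output projection and residual connection are omitted for brevity:

```python
import numpy as np

def nonlocal_block(x, w_theta, w_phi, w_g):
    """Embedded-Gaussian non-local operation on x of shape (N, C).
    w_theta, w_phi, w_g are (C, d) stand-ins for learned 1x1 convs."""
    theta = x @ w_theta            # queries, (N, d)
    phi = x @ w_phi                # keys,    (N, d)
    g = x @ w_g                    # values,  (N, d)
    scores = theta @ phi.T         # similarity between ALL position pairs
    attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
    attn /= attn.sum(axis=-1, keepdims=True)   # row-wise softmax
    return attn @ g                # every output attends to every position
```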

RoFormer: Enhanced Transformer with Rotary Position Embedding

ZhuiyiTechnology/roformer 20 Apr 2021

We propose a novel method named Rotary Position Embedding (RoPE) to effectively leverage the positional information.
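
RoPE encodes position by rotating each pair of query/key features through an angle proportional to the token's position, so the dot product between a query and a key depends only on their relative offset. A self-contained sketch, assuming the standard 10000-base frequency schedule:

```python
import numpy as np

def rotary_embed(x, base=10000.0):
    """Apply rotary position embedding to x of shape (seq_len, dim), dim even.
    Consecutive feature pairs are rotated by position-dependent angles."""
    seq_len, dim = x.shape
    pos = np.arange(seq_len)[:, None]              # (seq_len, 1)
    freqs = base ** (-np.arange(0, dim, 2) / dim)  # (dim/2,) per-pair frequencies
    angles = pos * freqs                           # (seq_len, dim/2)
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, 0::2], x[:, 1::2]
    out = np.empty_like(x)
    out[:, 0::2] = x1 * cos - x2 * sin             # 2D rotation of each pair
    out[:, 1::2] = x1 * sin + x2 * cos
    return out
```

Applied to both queries and keys before attention, the resulting scores depend on the relative offset between positions rather than on their absolute values.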

Self-Attention with Relative Position Representations

tensorflow/tensor2tensor NAACL 2018

On the WMT 2014 English-to-German and English-to-French translation tasks, this approach yields improvements of 1.3 BLEU and 0.3 BLEU over absolute position representations, respectively.
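
The idea is to add a learned embedding per (clipped) relative distance into the attention computation rather than encoding absolute positions. A hedged sketch of the key-side logit term, where the shapes and the clipping distance are illustrative assumptions:

```python
import numpy as np

def relative_logits(q, rel_emb, max_dist=8):
    """Relative-position term added to attention logits, in the spirit of
    Shaw et al. q: (seq_len, d); rel_emb: (2*max_dist+1, d), one learned
    vector per clipped relative offset."""
    seq_len = q.shape[0]
    idx = np.arange(seq_len)
    # clip j - i to [-max_dist, max_dist], then shift into table indices
    rel = np.clip(idx[None, :] - idx[:, None], -max_dist, max_dist) + max_dist
    # e_ij += q_i . a_{rel(i,j)}
    return np.einsum('id,ijd->ij', q, rel_emb[rel])
```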

Dual Attention Network for Scene Segmentation

junfu1115/DANet CVPR 2019

Specifically, we append two types of attention modules on top of traditional dilated FCN, which model the semantic interdependencies in spatial and channel dimensions respectively.
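
The position attention module resembles the non-local block sketched above; the channel attention module instead computes affinities between whole channel maps. A rough NumPy sketch in the spirit of DANet's channel attention (the paper's exact energy normalization differs slightly):

```python
import numpy as np

def channel_attention(x):
    """Channel attention on x of shape (C, N): C channel maps, each
    flattened to N spatial positions."""
    energy = x @ x.T                              # (C, C) channel affinities
    attn = np.exp(energy - energy.max(axis=-1, keepdims=True))
    attn /= attn.sum(axis=-1, keepdims=True)      # softmax over channels
    return attn @ x + x                           # re-weighted maps + residual
```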

3D human pose estimation in video with temporal convolutions and semi-supervised training

facebookresearch/VideoPose3D CVPR 2019

We start with predicted 2D keypoints for unlabeled video, then estimate 3D poses and finally back-project to the input 2D keypoints.
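
The back-projection step turns unlabeled video into a training signal: predicted 3D joints are projected back to image coordinates and compared against the 2D keypoints that produced them. A heavily simplified sketch of that consistency loss, assuming a pinhole camera with a fixed focal length (the paper's camera handling is more involved):

```python
import numpy as np

def backprojection_loss(pred_3d, keypoints_2d, focal=1.0):
    """pred_3d: (..., J, 3) predicted joints; keypoints_2d: (..., J, 2)
    detected 2D keypoints. Hypothetical pinhole projection for illustration."""
    z = np.maximum(pred_3d[..., 2:3], 1e-6)       # avoid division by zero
    proj = focal * pred_3d[..., :2] / z           # perspective projection to 2D
    return np.mean(np.abs(proj - keypoints_2d))   # L1 reprojection error
```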

A Transformer-based Approach for Source Code Summarization

wasiahmad/NeuralCodeSum ACL 2020

Generating a readable summary that describes the functionality of a program is known as source code summarization.

The Case for Learned Index Structures

learnedsystems/RMI 4 Dec 2017

Indexes are models: a B-Tree-Index can be seen as a model to map a key to the position of a record within a sorted array, a Hash-Index as a model to map a key to a position of a record within an unsorted array, and a BitMap-Index as a model to indicate if a data record exists or not.
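
That framing suggests a concrete recipe: fit a model from key to array position, record its worst-case error, and finish each lookup with a search bounded by that error. A minimal sketch using a single linear model in place of the paper's recursive model index:

```python
import numpy as np
from bisect import bisect_left

class LearnedIndex:
    """Toy 'index as model': predict a key's position in a sorted array,
    then correct the guess with a search bounded by the max training error."""
    def __init__(self, keys):
        self.keys = np.sort(np.asarray(keys, dtype=float))
        pos = np.arange(len(self.keys))
        self.slope, self.intercept = np.polyfit(self.keys, pos, 1)
        pred = self.slope * self.keys + self.intercept
        self.max_err = int(np.ceil(np.abs(pred - pos).max()))  # worst miss

    def lookup(self, key):
        guess = int(self.slope * key + self.intercept)
        lo = max(0, guess - self.max_err)
        hi = min(len(self.keys), guess + self.max_err + 1)
        # binary search only inside the model's error window
        return lo + bisect_left(self.keys[lo:hi], key)
```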

Deep Domain Confusion: Maximizing for Domain Invariance

erlendd/ddan 10 Dec 2014

Recent reports suggest that a generic supervised deep CNN model trained on a large-scale dataset reduces, but does not remove, dataset bias on a standard benchmark.

Train Short, Test Long: Attention with Linear Biases Enables Input Length Extrapolation

ofirpress/attention_with_linear_biases ICLR 2022

Since the introduction of the transformer model by Vaswani et al. (2017), a fundamental question has yet to be answered: how does a model achieve extrapolation at inference time for sequences that are longer than it saw during training?
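
ALiBi's answer is to drop position embeddings entirely and instead subtract a per-head linear penalty proportional to the query-key distance from each attention score, so attention decays with distance in a way that transfers to unseen lengths. A short sketch using the paper's geometric slope schedule, shown here in symmetric form (the causal model only applies it where j ≤ i):

```python
import numpy as np

def alibi_bias(seq_len, num_heads):
    """Static bias added to attention logits: -m_h * |i - j| per head h,
    with slopes 2^(-8h/num_heads) as in the paper's power-of-two setup."""
    slopes = 2.0 ** (-8.0 * np.arange(1, num_heads + 1) / num_heads)
    dist = np.abs(np.arange(seq_len)[:, None] - np.arange(seq_len)[None, :])
    return -slopes[:, None, None] * dist          # (heads, seq, seq)
```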

Neural Question Generation from Text: A Preliminary Study

magic282/NQG 6 Apr 2017

Automatic question generation aims to generate questions from a text passage where the generated questions can be answered by certain sub-spans of the given passage.