Activation Functions

General • 73 methods

Activation functions are non-linear functions applied in neural networks, typically after an affine transformation that combines weights and input features. The rectified linear unit (ReLU) has been the most popular choice over the past decade, although the best choice is architecture-dependent and many alternatives have emerged in recent years. This page maintains a constantly updated list of activation functions.
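As a brief illustration (not part of the original page), the minimal NumPy sketch below applies an affine transformation and then a few common activations element-wise; all shapes and variable names are illustrative, and the GELU shown uses the standard tanh approximation.

```python
import numpy as np

def relu(z):
    # Rectified linear unit: max(0, z) element-wise
    return np.maximum(0.0, z)

def sigmoid(z):
    # Logistic sigmoid: squashes inputs into (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def gelu(z):
    # Tanh approximation of GELU (Hendrycks & Gimpel, 2016)
    return 0.5 * z * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (z + 0.044715 * z**3)))

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))    # batch of 4 inputs with 8 features (illustrative)
W = rng.normal(size=(8, 16))   # weight matrix
b = np.zeros(16)               # bias vector

z = x @ W + b                  # affine transformation: z = xW + b
h = relu(z)                    # activation applied after the affine transform
print(h.shape)                 # (4, 16)
```

Swapping `relu` for `sigmoid`, `np.tanh`, or `gelu` in the last step is all it takes to change the non-linearity; in practice the choice interacts with depth, normalization, and initialization.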
