no code implementations • 17 Apr 2024 • Soumyendu Sarkar, Vineet Gundecha, Sahand Ghorbanpour, Alexander Shmakov, Ashwin Ramesh Babu, Avisek Naug, Alexandre Pichard, Mathieu Cocho
Our results show that a transformer of moderate depth with gated residual connections around the multi-head attention, the multi-layer perceptron, and the transformer block (STrXL), proposed in this paper, is optimal and boosts energy efficiency by an average of 22.1% over the existing spring damper (SD) controller on these complex spread waves.
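The entry above describes gated residual connections wrapped around each sublayer. As a minimal sketch of that idea (the exact STrXL gating function is not given here, so the per-feature sigmoid gate, the names `gated_residual` and `w_g`, and the toy `tanh` sublayer are all assumptions for illustration), a gated residual interpolates between the sublayer input and its output:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gated_residual(x, sublayer, w_g, b_g):
    """Gated residual connection (assumed form): a learned sigmoid gate
    interpolates between the input x and the sublayer's output."""
    g = sigmoid(x @ w_g + b_g)              # gate values in (0, 1)
    return (1.0 - g) * x + g * sublayer(x)  # g -> 0: identity; g -> 1: sublayer

d = 8
x = rng.normal(size=(4, d))
w_g = rng.normal(scale=0.1, size=(d, d))

# A strongly negative gate bias makes the block near-identity, the property
# usually cited for stabilizing deep transformers in RL training.
y_closed = gated_residual(x, np.tanh, w_g, b_g=-10.0)  # ~ x
y_open = gated_residual(x, np.tanh, w_g, b_g=+10.0)    # ~ tanh(x)
```

In the paper's architecture this wrapper would sit around the multi-head attention, the MLP, and the whole transformer block; the toy `tanh` stands in for any of those sublayers.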
no code implementations • 13 Sep 2022 • Soumyendu Sarkar, Vineet Gundecha, Sahand Ghorbanpour, Alexander Shmakov, Ashwin Ramesh Babu, Alexandre Pichard, Mathieu Cocho
Recent Wave Energy Converters (WEC) are equipped with multiple legs and generators to maximize energy generation.
Tasks: Multi-agent Reinforcement Learning, Reinforcement Learning (+1)