Welcome to Distributed Optimisation

Description:

In this course, we discuss several distributed optimization techniques, grouped into two categories. The first category consists of methods based on augmented Lagrangian decomposition; these include Dual Decomposition, the Alternating Direction Method of Multipliers (ADMM) with Proximal Message Passing, Analytical Target Cascading, and the Auxiliary Problem Principle. The second category addresses the decentralized solution of the Karush-Kuhn-Tucker (KKT) necessary conditions for local optimality; these include Optimality Condition Decomposition and Consensus+Innovation.
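
To fix ideas, here is a minimal, hedged sketch (not course material) of consensus ADMM applied to a toy distributed-averaging problem: each agent holds a private value and a local copy of the shared variable, and the agents alternate a local update, an averaging step, and a dual update. The function name and all numerical values are made up for this illustration.

```python
# Illustrative sketch: consensus ADMM for distributed averaging.
# Each agent i holds a private value a_i and a local variable x_i;
# all agents must agree on a common value z.
import numpy as np

def consensus_admm(a, rho=1.0, iters=100):
    n = len(a)
    x = np.zeros(n)          # local primal variables
    z = 0.0                  # global consensus variable
    u = np.zeros(n)          # scaled dual variables
    for _ in range(iters):
        x = (a + rho * (z - u)) / (1.0 + rho)   # local x-updates (closed form)
        z = np.mean(x + u)                      # consensus (z-) update
        u = u + x - z                           # dual ascent step
    return z

a = np.array([1.0, 4.0, 7.0, 10.0])
print(consensus_admm(a))   # converges to the average, 5.5
```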

Prerequisites: Linear algebra or matrix calculus  


Organizer: Professor Rafael Wisniewski

Lecturers: Prof. Richard Heusdens (Delft University of Technology), Prof. Mads Græsbøll Christensen, Prof. Rafael Wisniewski and Associate Prof. John Leth 

ECTS: 3

Time: November 5 to November 9, 2018

Place: 

City: 

Number of seats: 40

Deadline: October 15, 2018

Important information concerning PhD courses
We have for some time experienced problems with no-shows for both project and general courses. It has now reached a point where we are forced to take action. The Doctoral School has therefore decided to introduce a no-show fee of DKK 5,000 for each course where the student does not show up. Cancellations are accepted no later than 2 weeks before the start of the course. Registered illness is of course an acceptable reason for not showing up on those days. Furthermore, all courses open for registration approximately three months before they start, which should also give new students a chance to register for courses during the year. We look forward to your registrations.

Contents:

Day 1: Introduction to optimisation by John Leth <jjl@es.aau.dk>

Welcome to Physical Human-Robot Interaction - Force Estimation & Control

Description:

Humans and robots are beginning to collaborate physically to solve complex tasks, such as assembly in unstructured environments, where the robot takes care of the heavy lifting while the operator guides its movement. For a robot to support such an operation, the external forces applied by the operator must be translated into a trajectory for the robot to follow, together with a control system that ensures the trajectory is tracked.

 

The course has three main parts: modeling of robot manipulators, force control of robot manipulators, and estimation of the human operator's intent.

 

The course provides an introduction to kinematic and dynamic modeling of robot manipulators, showing how to go from a CAD file in SolidWorks to a dynamic simulation model in MATLAB. This is followed by the fundamentals of direct and indirect force control, covering inverse dynamics control, impedance control, and admittance control. Lastly, we cover intent estimation, which is closely coupled to the more general field of nonlinear disturbance observers for Euler-Lagrange systems. Throughout the course, examples and exercises are carried out on a robot manipulator simulation model implemented in MATLAB Simulink/SimMechanics.
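
As a rough flavour of indirect force control (a sketch under stated assumptions, not the course's implementation), the snippet below renders a one-degree-of-freedom admittance: the measured external force is passed through a virtual mass-spring-damper to produce a reference trajectory for the robot to track. All numerical values are illustrative.

```python
# Illustrative sketch of a 1-DOF admittance controller: the external force
# f_ext drives a virtual system M*xdd + D*xd + K*x = f_ext, whose state x
# is used as the position reference; integrated with explicit Euler.
import numpy as np

def admittance_reference(f_ext, dt=0.001, M=2.0, D=20.0, K=50.0):
    x, xd = 0.0, 0.0
    traj = []
    for f in f_ext:
        xdd = (f - D * xd - K * x) / M   # virtual mass-spring-damper dynamics
        xd += xdd * dt
        x += xd * dt
        traj.append(x)
    return np.array(traj)

# A constant 10 N push for 1 s: the reference settles near f/K = 0.2 m.
forces = np.full(1000, 10.0)
print(admittance_reference(forces)[-1])
```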

Prerequisites:

A basic knowledge of mathematics as obtained through undergraduate engineering studies. Installation of SolidWorks (Student version is fine) and MATLAB 2016b or later.



Organizer: Associate Professor John Leth, Department of Electronic Systems


Lecturers: Associate Professor John Leth, Department of Electronic Systems and PostDoc Rasmus Pedersen, Department of Electronic Systems


ECTS: 3

Time: August 13 to August 17, 2018


Place: 

City: 

Number of seats: 50

Deadline: July 23, 2018



Welcome to Advanced Topics in Acoustic Array Signal Processing

Description:

Acoustic arrays are becoming a ubiquitous technology, including in consumer electronics and healthcare technology. Microphone arrays are now found in smartphones, laptops, TVs, etc., and loudspeaker arrays are emerging as a promising technology in home entertainment systems, car audio systems, and public address systems. Moreover, as wireless communication capabilities become widespread, audio devices can form ad hoc networks and cooperate in solving signal processing problems, such as estimation and filtering. This offers many new possibilities but also poses many new challenges, as a number of difficult technical problems must be solved. In the course, a general introduction to acoustic array signal processing is given, including commonly used models and assumptions as well as classical methods for problems such as localization, beamforming, and noise reduction. The remainder of the course is devoted to recent advances in acoustic array signal processing and its applications, including model-based localization and beamforming, sound zone control with loudspeaker arrays, multi-channel noise reduction in ad hoc microphone arrays, noise statistics estimation, speech intelligibility prediction, and speech enhancement in binaural hearing aids.

The course is dedicated to the following subjects:

  • Fundamentals: Definitions, narrow-band signals, near-field and far-field, array manifold vector. Beamforming, uniform linear array, directivity pattern. Performance criteria (beam-width, sidelobe level, directivity, white noise gain). Sensitivity. Sampling of continuous aperture. Wide-band signals and nested arrays.
  • Space-time random processes: Snapshots, spatial correlation matrix, signal and noise subspaces.
  • Optimal array processors: MVDR (Capon), MPDR, Maximum SNR, MMSE, LCMV (a small MVDR sketch is given after this list).
  • Sensitivity and robustness: Noise fields and multi-path and their influence on performance. Superdirective beamformer. Diagonal loading.
  • Adaptive spatial filtering: Frost method, generalized sidelobe canceller (GSC).
  • Parameter estimation (DoA): ML estimation, resolution, Cramér-Rao lower bound.
  • Classical methods for localization: the Bartlett method and methods based on eigen-decomposition (Pisarenko, MUSIC, ESPRIT). Resolution. MVDR estimation. Performance evaluation and comparison.
  • Advances: Model-based processing and estimation, multi-channel noise reduction, ad hoc microphone arrays.
  • Applications: Speech processing, hearing aids, wireless acoustic sensor networks, loudspeaker arrays.
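
For orientation (an illustrative sketch only, not course material), the snippet below computes narrow-band MVDR (Capon) beamformer weights, w = R^{-1} d / (d^H R^{-1} d), for a uniform linear array; the array geometry, frequency, and spatially white noise covariance are made-up example values.

```python
# Illustrative sketch: MVDR weights for a uniform linear array.
import numpy as np

def steering_vector(theta, n_mics, spacing, wavelength):
    # Array manifold (steering) vector for a plane wave from angle theta.
    k = 2 * np.pi / wavelength
    m = np.arange(n_mics)
    return np.exp(-1j * k * spacing * m * np.sin(theta))

def mvdr_weights(R, d):
    # w = R^{-1} d / (d^H R^{-1} d): minimum output power, distortionless toward d.
    Rinv_d = np.linalg.solve(R, d)
    return Rinv_d / (d.conj() @ Rinv_d)

n_mics, spacing, wavelength = 8, 0.04, 0.34        # 8 mics, 4 cm spacing, ~1 kHz tone
d = steering_vector(np.deg2rad(30), n_mics, spacing, wavelength)
R = np.eye(n_mics)                                 # spatially white noise covariance
w = mvdr_weights(R, d)
print(np.abs(w.conj() @ d))                        # distortionless constraint: gain 1 toward 30 degrees
```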

Prerequisites:
A basic knowledge of mathematics as obtained through undergraduate engineering studies.


Organizer: Prof. Mads Græsbøll Christensen

Lecturers: Sharon Gannot, Assistant Prof. Jesper Rindom Jensen, Assistant Prof. Jesper Kjær Nielsen, Prof. Mads Græsbøll Christensen


ECTS: 4

Time: August 13-17, 2018

Place: 

City: 

Number of seats: 25

Deadline: July 23, 2018


Welcome to Linear Matrix Inequalities in Control

Description:

Over the last two decades or so, Linear Matrix Inequalities (LMIs) have become a de facto standard tool for numerical analysis and design in control engineering. Many standard problems, such as stability, robustness, performance, and state feedback design, are naturally formulated as LMIs. The advantage of formulating a controller design problem with LMIs is that additional constraints can easily be added; for example, a linear quadratic regulator (LQR) can be designed subject to constraints on the region of the closed-loop poles. In addition, various efficient numerical toolboxes have been developed over the years, permitting straightforward use of LMIs in practical settings.
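
As a small, hedged example of the kind of problem the course addresses (assuming the freely available CVXPY toolbox; the system matrix is chosen arbitrarily for illustration), the snippet below certifies stability of dx/dt = Ax by solving the Lyapunov LMI A^T P + P A < 0, P > 0 as a semidefinite program.

```python
# Illustrative sketch: Lyapunov stability LMI solved as an SDP with CVXPY.
import numpy as np
import cvxpy as cp

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])          # an arbitrary stable example system

n = A.shape[0]
P = cp.Variable((n, n), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(n),                       # P positive definite
               A.T @ P + P @ A << -eps * np.eye(n)]        # Lyapunov inequality
prob = cp.Problem(cp.Minimize(0), constraints)             # pure feasibility problem
prob.solve()
print(prob.status)    # "optimal" -> a Lyapunov certificate P exists, so A is stable
```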

The theory is applied to practical examples to give hands-on experience.

 

This course highlights a number of different usages of LMIs for solving various control-theoretic problems, covering the following main subjects:

- Convex optimization and toolboxes

- Dissipative systems

- Stability analysis

- Parametric uncertainties

- Feedback design

Prerequisites: Robust Control (8th sem. Automation and Control) or similar


Organizer: Jan Bendtsen

Lecturers: John Leth and Jan Bendtsen

ECTS: 3

Time: May 28 to June 1, 2018

Place: 

City: 

Number of seats: 40

Deadline: 7 May 2018


Welcome to Deep Learning


Description:

Deep learning is a recently emerged area of research in machine learning and has shown huge success in a variety of areas. Its impact on many applications is revolutionary, which has ignited intensive study of the topic.

 

During the past few decades, the prevalent machine learning methods, including support vector machines, conditional random fields, hidden Markov models, and the one-hidden-layer multi-layer perceptron, have found a broad range of applications. While effective for simple or well-constrained problems, these methods share one drawback: they all have shallow architectures. In general, they have no more than one or two layers of nonlinear feature transformation, which limits their performance in many real-world applications.

 

In contrast, the human brain and its cognitive process, being far more complicated, have deep architectures organized into many hierarchical layers, with information becoming more abstract as it moves up the hierarchy. Interest in deep architectures was reignited in 2006, when it was shown that deep belief networks could be trained effectively. Since then, deep learning methods and applications have witnessed unprecedented success.

 

This course gives an introduction to deep learning, both by presenting key methods and by addressing specific applications, and covers both the theory and practice of deep learning. Topics will include:

  • Machine learning fundamentals
  • Deep learning concepts
  • Deep learning methods including deep autoencoders, deep neural networks, recurrent neural networks, long short-term memory recurrent networks, convolutional neural networks, and generative adversarial networks (a minimal example is sketched after this list).
  • Selected applications of deep learning
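
To fix the vocabulary used above (a minimal sketch in PyTorch, not part of the course material; the data and hyperparameters are arbitrary), the snippet below defines and trains a small fully connected network with two hidden layers on random toy data.

```python
# Illustrative sketch: a tiny deep feed-forward network trained with
# gradient-based optimisation on random data.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(20, 64), nn.ReLU(),    # hidden layer 1
    nn.Linear(64, 64), nn.ReLU(),    # hidden layer 2 (the "deep" part)
    nn.Linear(64, 2),                # two-class output (logits)
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(128, 20)             # toy inputs
y = torch.randint(0, 2, (128,))      # toy labels

for _ in range(100):
    opt.zero_grad()
    loss = loss_fn(model(x), y)      # forward pass and loss
    loss.backward()                  # backpropagation
    opt.step()                       # parameter update
print(loss.item())
```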

 

Literature:

 

Ian Goodfellow, Yoshua Bengio and Aaron Courville, Deep Learning, The MIT Press, 2016.

Li Deng and Dong Yu, Deep Learning: Methods and Applications, Now publishing, 2014.

Prerequisites:  Basic probability and statistics theory, linear algebra and machine learning.

Organizer: Professor Zheng-Hua Tan

Lecturers: Zheng-Hua Tan, Professor, Aalborg University, Denmark, zt@es.aau.dk, http://kom.aau.dk/~zt/


ECTS: 1

Time: 26-27 March 2018


Place: Aalborg University

City: 

Number of seats: 50

Deadline: 5 March 2018
