Instytut Podstawowych Problemów Techniki
Polskiej Akademii Nauk

Staff

Maciej Pióro, MSc

Department of Intelligent Technologies (ZTI)
PhD student
phone: (+48) 22 826 12 81, ext. 168
room: 411
e-mail:

Conference papers
1.  Antoniak S., Krutul M., Pióro M., Krajewski J., Ludziejewski J., Ciebiera K., Król K., Odrzygóźdź T., Cygan M., Jaszczur S., Mixture of Tokens: Continuous MoE through Cross-Example Aggregation, NeurIPS, The Thirty-Eighth Annual Conference on Neural Information Processing Systems, 2024-12-10/12-15, Vancouver (CA), pp.1, 2024

Abstract:
Mixture of Experts (MoE) models based on Transformer architecture are pushing the boundaries of language and vision tasks. The allure of these models lies in their ability to substantially increase the parameter count without a corresponding increase in FLOPs. Most widely adopted MoE models are discontinuous with respect to their parameters - often referred to as sparse. At the same time, existing continuous MoE designs either lag behind their sparse counterparts or are incompatible with autoregressive decoding. Motivated by the observation that the adaptation of fully continuous methods has been an overarching trend in Deep Learning, we develop Mixture of Tokens (MoT), a simple, continuous architecture that is capable of scaling the number of parameters similarly to sparse MoE models. Unlike conventional methods, MoT assigns mixtures of tokens from different examples to each expert. This architecture is fully compatible with autoregressive training and generation. Our best models not only achieve a 3× increase in training speed over dense Transformer models in language pretraining but also match the performance of state-of-the-art MoE architectures. Additionally, a close connection between MoT and MoE is demonstrated through a novel technique we call transition tuning.

Author affiliations:
Antoniak S. - other affiliation
Krutul M. - other affiliation
Pióro M. - IPPT PAN
Krajewski J. - other affiliation
Ludziejewski J. - other affiliation
Ciebiera K. - other affiliation
Król K. - other affiliation
Odrzygóźdź T. - other affiliation
Cygan M. - other affiliation
Jaszczur S. - other affiliation
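
The token-mixing idea described in the abstract above can be pictured with a short, self-contained sketch. This is our own illustration, not the authors' code: the names (MixtureOfTokens, num_experts, group_size) are ours, the experts are plain two-layer feed-forward blocks, and tokens are assumed to be pre-arranged into groups drawn from different examples.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MixtureOfTokens(nn.Module):
        def __init__(self, d_model: int, d_ff: int, num_experts: int, group_size: int):
            super().__init__()
            self.group_size = group_size
            # One controller produces, per token, a mixing logit for every expert.
            self.controller = nn.Linear(d_model, num_experts)
            # Expert feed-forward weights, batched over experts.
            self.w_in = nn.Parameter(torch.randn(num_experts, d_model, d_ff) * d_model ** -0.5)
            self.w_out = nn.Parameter(torch.randn(num_experts, d_ff, d_model) * d_ff ** -0.5)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (num_groups, group_size, d_model); each group mixes tokens
            # taken from different examples, so autoregressive decoding is unaffected.
            assert x.shape[1] == self.group_size, "tokens must arrive pre-grouped"
            logits = self.controller(x)                       # (G, S, E)
            weights = F.softmax(logits, dim=1)                # normalise over tokens in the group
            # Each expert receives a continuous mixture of the group's tokens.
            mixed = torch.einsum("gse,gsd->ged", weights, x)  # (G, E, d_model)
            hidden = F.relu(torch.einsum("ged,edf->gef", mixed, self.w_in))
            expert_out = torch.einsum("gef,efd->ged", hidden, self.w_out)
            # Redistribute each expert's output back to the tokens with the same weights.
            return torch.einsum("gse,ged->gsd", weights, expert_out)

    # Usage: 8 groups of 4 tokens, model width 64.
    layer = MixtureOfTokens(d_model=64, d_ff=256, num_experts=16, group_size=4)
    print(layer(torch.randn(8, 4, 64)).shape)  # torch.Size([8, 4, 64])
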
2.  Pióro M., Wołczyk M., Pascanu R., Von Oswald J., Sacramento J., State soup: in-context skill learning, retrieval and mixing, Next Generation of Sequence Modeling Architectures Workshop at International Conference on Machine Learning 2024, 2024-07-26/07-26, Vienna (AT), pp.1-4, 2024

Abstract:
A new breed of gated-linear recurrent neural networks has reached state-of-the-art performance on a range of sequence modeling problems. Such models naturally handle long sequences efficiently, as the cost of processing a new input is independent of sequence length. Here, we explore another advantage of these stateful sequence models, inspired by the success of model merging through parameter interpolation. Building on parallels between fine-tuning and in-context learning, we investigate whether we can treat internal states as task vectors that can be stored, retrieved, and then linearly combined, exploiting the linearity of recurrence. We study this form of fast model merging on Mamba-2.8b, a pretrained recurrent model, and present preliminary evidence that simple linear state interpolation methods suffice to improve next-token perplexity as well as downstream in-context learning task performance.

Author affiliations:
Pióro M. - IPPT PAN
Wołczyk M. - other affiliation
Pascanu R. - other affiliation
Von Oswald J. - other affiliation
Sacramento J. - other affiliation
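
As a rough illustration of the state-mixing idea in the abstract above, the sketch below uses a toy diagonal gated-linear recurrence standing in for a pretrained model such as Mamba-2.8b. It only shows the mechanics of storing final states, retrieving them, and interpolating them linearly before continuing on new input; all names are ours.

    from typing import Optional
    import torch
    import torch.nn as nn

    class ToyLinearRecurrence(nn.Module):
        def __init__(self, d_model: int):
            super().__init__()
            self.gate = nn.Linear(d_model, d_model)   # per-step decay a_t
            self.inp = nn.Linear(d_model, d_model)    # per-step input contribution

        def forward(self, x: torch.Tensor, state: Optional[torch.Tensor] = None) -> torch.Tensor:
            # x: (seq_len, d_model). The update is linear in the state h, which is
            # what makes interpolating stored states a well-defined operation.
            h = torch.zeros(x.shape[-1]) if state is None else state
            for x_t in x:
                a_t = torch.sigmoid(self.gate(x_t))   # elementwise decay in (0, 1)
                h = a_t * h + self.inp(x_t)
            return h

    d = 32
    model = ToyLinearRecurrence(d)
    prompt_a, prompt_b, query = torch.randn(10, d), torch.randn(10, d), torch.randn(5, d)

    # "State soup": store per-skill states, then retrieve and mix them linearly.
    state_a = model(prompt_a)                     # state after in-context skill A
    state_b = model(prompt_b)                     # state after in-context skill B
    mixed_state = 0.5 * state_a + 0.5 * state_b   # simple linear interpolation

    # Continue from the mixed state as if both contexts were (softly) present.
    print(model(query, state=mixed_state).shape)  # torch.Size([32])
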
3.  Pióro M., Ciebiera K., Król K., Ludziejewski J., Krutul M., Krajewski J., Antoniak S., Miłoś P., Cygan M., Jaszczur S., MoE-Mamba: Efficient Selective State Space Models with Mixture of Experts, Next Generation of Sequence Modeling Architectures Workshop at International Conference on Machine Learning 2024, 2024-07-26/07-26, Vienna (AT), pp.1-4, 2024

Abstract:
State Space Models (SSMs) have become serious contenders in the field of sequential modeling, challenging the dominance of Transformers. At the same time, Mixture of Experts (MoE) has significantly improved Transformer-based Large Language Models, including recent state-of-the-art open models. We propose that to unlock the potential of SSMs for scaling, they should be combined with MoE. We showcase this on Mamba, a recent SSM-based model that achieves remarkable performance. Our model, MoE-Mamba, outperforms Mamba and matches the performance of Transformer-MoE. In particular, MoE-Mamba reaches the same performance as Mamba in 2.35x fewer training steps while preserving the inference performance gains of Mamba against Transformer.

Author affiliations:
Pióro M. - IPPT PAN
Ciebiera K. - other affiliation
Król K. - other affiliation
Ludziejewski J. - other affiliation
Krutul M. - other affiliation
Krajewski J. - other affiliation
Antoniak S. - other affiliation
Miłoś P. - other affiliation
Cygan M. - other affiliation
Jaszczur S. - other affiliation
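
The architectural pattern from the abstract above, interleaving a sequence-mixing (SSM) block with a sparse MoE feed-forward block, can be sketched as follows. This is a structural illustration only, not the paper's implementation: SSMBlockStub stands in for a real Mamba block, and the MoE layer is a simple top-1 switch router.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SSMBlockStub(nn.Module):
        # Placeholder for a Mamba/SSM block; any sequence mixer would do here.
        def __init__(self, d_model: int):
            super().__init__()
            self.mix = nn.GRU(d_model, d_model, batch_first=True)
        def forward(self, x):
            return x + self.mix(x)[0]

    class SwitchMoE(nn.Module):
        # Top-1 routed feed-forward experts replacing the dense MLP.
        def __init__(self, d_model: int, d_ff: int, num_experts: int):
            super().__init__()
            self.router = nn.Linear(d_model, num_experts)
            self.experts = nn.ModuleList(
                nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
                for _ in range(num_experts)
            )
        def forward(self, x):
            b, t, d = x.shape
            flat = x.reshape(-1, d)
            probs = F.softmax(self.router(flat), dim=-1)
            top_p, top_e = probs.max(dim=-1)          # top-1 expert per token
            out = torch.zeros_like(flat)
            for e, expert in enumerate(self.experts):
                mask = top_e == e
                if mask.any():
                    out[mask] = top_p[mask, None] * expert(flat[mask])
            return x + out.reshape(b, t, d)

    class MoEMambaBlock(nn.Module):
        # One layer of the pattern: SSM mixing followed by a sparse MoE MLP.
        def __init__(self, d_model: int, d_ff: int, num_experts: int):
            super().__init__()
            self.ssm = SSMBlockStub(d_model)
            self.moe = SwitchMoE(d_model, d_ff, num_experts)
        def forward(self, x):
            return self.moe(self.ssm(x))

    block = MoEMambaBlock(d_model=64, d_ff=256, num_experts=8)
    print(block(torch.randn(2, 16, 64)).shape)  # torch.Size([2, 16, 64])
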
4.  Ludziejewski J., Krajewski J., Adamczewski K., Pióro M., Krutul M., Antoniak S., Ciebiera K., Król K., Odrzygoźdź T., Sankowski P., Cygan M., Jaszczur S., Scaling Laws for Fine-Grained Mixture of Experts, ICML, The Forty-First International Conference on Machine Learning, 2024-07-21/07-27, Vienna (AT), pp.33270-33288, 2024

Abstract:
Mixture of Experts (MoE) models have emerged as a primary solution for reducing the computational cost of Large Language Models. In this work, we analyze their scaling properties, highlighting certain arbitrary assumptions present in the existing literature. In particular, we introduce a new hyperparameter, granularity, the modification of which allows for the optimal adjustment of the size of experts. Subsequently, we present scaling laws for fine-grained MoE, taking into account the number of training tokens, model size, and granularity. Using these scaling laws, we derive the optimal training configuration for a given computational budget. Furthermore, in contrast with previous works, we demonstrate that the gap in efficiency between dense and MoE models grows as we scale up the model size and training budget.

Author affiliations:
Ludziejewski J. - other affiliation
Krajewski J. - other affiliation
Adamczewski K. - other affiliation
Pióro M. - IPPT PAN
Krutul M. - other affiliation
Antoniak S. - other affiliation
Ciebiera K. - other affiliation
Król K. - other affiliation
Odrzygoźdź T. - other affiliation
Sankowski P. - other affiliation
Cygan M. - other affiliation
Jaszczur S. - other affiliation
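
The granularity hyperparameter from the abstract above can be illustrated with a toy parameter count (our own example, not the paper's code): splitting each expert of width d_ff into G narrower experts while activating G times as many of them keeps the active parameters per token, and hence roughly the FLOPs, constant while making the routing decision finer grained.

    def moe_ffn_params(d_model: int, d_ff: int, num_experts: int,
                       top_k: int, granularity: int) -> tuple[int, int]:
        """Return (total, active-per-token) expert parameters for one MoE layer."""
        expert_width = d_ff // granularity          # finer experts are narrower
        experts = num_experts * granularity         # ... but there are more of them
        active = top_k * granularity                # ... and more are active per token
        params_per_expert = 2 * d_model * expert_width   # up- and down-projection
        return experts * params_per_expert, active * params_per_expert

    d_model, d_ff, num_experts, top_k = 1024, 4096, 32, 1
    for g in (1, 2, 4, 8):
        total, active = moe_ffn_params(d_model, d_ff, num_experts, top_k, g)
        print(f"granularity={g}: total={total / 1e6:.0f}M, active/token={active / 1e6:.0f}M")
    # Total and active parameters are unchanged across granularities; only the
    # number and size of experts (and hence routing resolution) change.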
