This tutorial should help you get started with the audioplayers library, covering the basics while also guiding you through its advanced features. You can also play around with our official example app and explore its code, which showcases every feature the library has to offer.
In order to install this package, add the latest version of `audioplayers` to your `pubspec.yaml` file.
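For example (the version below is only illustrative; check pub.dev for the latest release):

dependencies:
  audioplayers: ^6.0.0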
This package uses the Federated Plugin guidelines to support multiple platforms, so it should just work on all supported platforms your app is built for without any extra configuration.
You do not need to add the `audioplayers_*` packages directly.
When building and running for certain platforms, you need to pay attention to additional steps:
- Linux Setup (`audioplayers_linux`).
- Windows Setup (`audioplayers_windows`).
An `AudioPlayer` instance can play a single audio at a time (think of it as a single boombox). To create one, simply call the constructor:
final player = AudioPlayer();
You can create as many instances as you wish to play multiple audios simultaneously, or just to more easily control separate sources.
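For example, a common pattern in games (the player names here are just illustrative):

final musicPlayer = AudioPlayer(); // long background music
final sfxPlayer = AudioPlayer(); // short sound effects, independent of the music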
Each AudioPlayer is created empty and has to be configured with an audio source (and it can only have one; changing it will replace the previous source).
The source (cf. packages/audioplayers/lib/src/source.dart) is basically what audio you are playing (a song, sound effect, radio stream, etc), and it can have one of 4 types:
- UrlSource: get the audio from a remote URL from the Internet. This can be a direct link to a supported file to be downloaded, or a radio stream.
- DeviceFileSource: access a file on the user's device, probably selected by a file picker.
- AssetSource: play an asset bundled with your app, by default within the `assets` directory. To customize the prefix, see AudioCache.
- BytesSource (only some platforms): pass in the bytes of your audio directly (read it from anywhere).
In order to set the source on your player instance, call `setSource` with the appropriate source object:
await player.setSource(AssetSource('sounds/coin.wav'));
Alternatively, call the shortcut method:
await player.setSourceUrl(url); // equivalent to setSource(UrlSource(url));
Or, if you want to set the source and start playing immediately, use the `play` shortcut:
await player.play(DeviceFileSource(localFile)); // will immediately start playing
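Playing from bytes works the same way on the platforms that support it; here is a minimal sketch, where the local file path and the use of dart:io are assumptions for illustration:

import 'dart:io';

final bytes = await File('/path/to/coin.wav').readAsBytes(); // hypothetical local file
await player.play(BytesSource(bytes));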
After the source is set, you can use the following methods to control the player:
Starts playback from the current position (by default, from the start).
await player.resume();
Changes the current position (note: this does not affect the "playing" status).
await player.seek(Duration(milliseconds: 1200));
Stops the playback but keeps the current position.
await player.pause();
Stops the playback and also resets the current position.
await player.stop();
Equivalent to calling `stop` and then releasing any resources associated with this player. This means that memory might be de-allocated, etc. Note that the player is still in a ready-to-use state; if you call `resume` again, any necessary resources will be re-fetched. Particularly on Android, the media player is quite resource-intensive, and this will let it go. Data will be buffered again when needed (if it's a remote file, it will be downloaded again).
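Like the other controls above, this is a single call:
await player.release();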
Disposes the player. This calls `release` and also closes all open streams. This player instance must not be used anymore!
await player.dispose();
Play is just a shortcut method that allows you to:
- set a source
- configure some player parameters (volume)
- configure audio attributes
- resume (start playing immediately)
All in a single function call. For most simple use cases, it might be the only method you need.
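For example, a sketch setting the volume and the starting position in the same call (these named parameters exist in recent versions of the library; check your version's API if in doubt):

await player.play(
  AssetSource('sounds/coin.wav'),
  volume: 0.5,
  position: Duration.zero,
);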
You can also change the following parameters:
Changes the audio volume. Defaults to `1.0`. It can go from `0.0` (mute) to `1.0` (max; some platforms allow values greater than 1), varying linearly.
await player.setVolume(0.5);
Changes the stereo balance. Defaults to `0.0` (both channels); `1.0` means right channel only, `-1.0` means left channel only.
await player.setBalance(1.0); // right channel only
Changes the playback rate (i.e. the "speed" of playback). Defaults to `1.0` (normal speed); `2.0` would be 2x speed, etc.
await player.setPlaybackRate(0.5); // half speed
The release mode controls what happens when the playback ends. There are 3 options:
- `.stop`: just stops the playback but keeps all associated resources.
- `.release` (default): releases all resources associated with this player, equivalent to calling the `release` method.
- `.loop`: starts over after completion, looping over and over again.
await player.setReleaseMode(ReleaseMode.loop);
Note: you can control exactly what happens when the playback ends using the `onPlayerComplete` stream (see Streams below).
Note: there are caveats when looping audio without gaps. Depending on the file format and platform, when audioplayers uses the native implementation of the "looping" feature, there will be gaps between plays, which might not be noticeable for non-continuous SFX but will definitely be noticeable for looping songs. Please check out the Gapless Loop section on our Troubleshooting Guide for more details.
The Player Mode represents what kind of native SDK is used to play back audio, when multiple options are available (currently only relevant for Android). There are 2 options:
- `.mediaPlayer` (default): for long media files or streams.
- `.lowLatency`: for short audio files, since it reduces the impact on visuals or UI performance.
Note: on low latency mode, these features are NOT available:
- get duration & duration event
- get position & position event
- playback completion event (this means you are responsible for stopping the player)
- seeking & seek completion event
Normally you want to use `.mediaPlayer`, unless you care about performance and your audios are short (i.e. sound effects in games).
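For example, a minimal sketch for short sound effects (setPlayerMode is assumed to be available in your version of the library; the asset path is illustrative):

final sfxPlayer = AudioPlayer();
await sfxPlayer.setPlayerMode(PlayerMode.lowLatency);
await sfxPlayer.play(AssetSource('sounds/jump.wav'));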
You can globally control the amount of log messages that are emitted by this package:
AudioLogger.logLevel = AudioLogLevel.info;
You can pick one of 3 options:
- `.info`: show all log messages, including info/debug messages.
- `.error` (default): show only error messages.
- `.none`: show no messages at all (not recommended).
Note: before opening any issue, always try changing the log level to `.info` to gather any information that might assist you in solving the problem.
Note: despite our best efforts, some native SDK implementations that we use spam a lot of log messages that we currently haven't figured out how to make conform to this configuration (especially noticeable on Android). PRs are more than welcome!
You can also listen for [Log events](#log-event).
An Audio Context is a (mostly mobile-specific) set of secondary, platform-specific aspects of audio playback, typically related to how the act of playing audio interacts with other features of the device. In most cases, you do not need to change this.
The Audio Context configuration can be set globally for all players via:
AudioPlayer.global.setAudioContext(AudioContextConfig(/*...*/).build());
To configure a player specific Audio Context (if desired), use:
player.setAudioContext(AudioContextConfig(/*...*/).build());
Note: as the iOS platform cannot handle contexts for each player individually, for convenience this will also set the Audio Context globally.
While each platform has its own set of configurations, they are somewhat related, and you can create them using a unified interface called `AudioContextConfig`. It provides generic abstractions that convey intent, which are then converted to platform-specific configurations.
Note that if this mapping is not perfect for your needs, you can create your configuration from scratch by providing exact details for each platform via AudioContextAndroid and AudioContextIOS:
player.setAudioContext(AudioContext(
android: AudioContextAndroid(/*...*/),
iOS: AudioContextIOS(/*...*/),
));
Each player has a variety of streams that can be used to listen to events, state changes, and other useful information coming from the player.
All streams also emit the same native platform errors via the `onError` callback.
This event returns the duration of the file, when it's available (it might take a while because it's being downloaded or buffered).
player.onDurationChanged.listen((Duration d) {
print('Max duration: $d');
setState(() => duration = d);
});
This event updates the current position of the audio. You can use it to make a progress bar, for instance.
player.onPositionChanged.listen((Duration p) {
  print('Current position: $p');
  setState(() => position = p);
});
This event returns the current player state. You can use it to show whether the player is playing, paused, or stopped.
player.onPlayerStateChanged.listen((PlayerState s) {
  print('Current player state: $s');
  setState(() => playerState = s);
});
This event is called when the audio finishes playing; it's used by the loop release mode, for instance. It does not fire when you interrupt the audio with pause or stop.
player.onPlayerComplete.listen((_) {
onComplete();
setState(() {
position = duration;
});
});
This event returns the log messages from the native platform. The logs are handled by default via `AudioLogger.log()`, and errors via `AudioLogger.error()`; see Logs.
player.onLog.listen(
  (String message) => AudioLogger.log(message),
  onError: (Object e, [StackTrace? stackTrace]) => AudioLogger.error(e, stackTrace),
);
Or to handle global logs:
AudioPlayer.global.onLog.listen(
  (String message) => AudioLogger.log(message),
  onError: (Object e, [StackTrace? stackTrace]) => AudioLogger.error(e, stackTrace),
);
All mentioned events can also be obtained via a combined event stream.
player.eventStream.listen((AudioEvent event) {
print(event.eventType);
});
Or to handle global events:
AudioPlayer.global.eventStream.listen((GlobalAudioEvent event) {
print(event.eventType);
});
Flutter does not provide an easy way to play audio from your local assets, but that's where the `AudioCache` class comes into play. It actually copies the asset to a temporary folder on the device, from where it is then played as a local file. It works as a cache because it keeps track of the copied files so that you can replay them without delay.
If desired, you can change the `AudioCache` per player via the `AudioPlayer().audioCache` property, or for all players via `AudioCache.instance`.
When playing local assets, by default every instance of AudioPlayer uses a shared global instance of AudioCache, which has a default prefix "assets/" configured, as per Flutter conventions. However, you can easily change that by specifying your own instance of AudioCache with any other (or no) prefix.
Default behavior, presuming that your audio is stored at `assets/audio/my-audio.wav`:
final player = AudioPlayer();
await player.play(AssetSource('audio/my-audio.wav'));
Remove the asset prefix for all players:
AudioCache.instance = AudioCache(prefix: '');
final player = AudioPlayer();
await player.play(AssetSource('assets/audio/my-audio.wav'));
Set a different prefix for only one player (e.g. when using assets from another package):
final player = AudioPlayer();
player.audioCache = AudioCache(prefix: 'packages/OTHER_PACKAGE/assets/');
await player.play(AssetSource('other-package-audio.wav'));
By default, each time you initialize a new instance of AudioPlayer, a unique playerId is generated and assigned to it using the uuid package. This is used internally to route messages between multiple players, and it allows you to control multiple audios at the same time. If you want to specify the playerId, you can do so when creating the player:
final player = AudioPlayer(playerId: 'my_unique_playerId');
Two players with the same id will point to the same media player on the native side.
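For instance, a small sketch of this behavior (the id and asset path are arbitrary):

final a = AudioPlayer(playerId: 'shared_player');
final b = AudioPlayer(playerId: 'shared_player');
await a.play(AssetSource('sounds/coin.wav'));
await b.pause(); // pauses the audio started via `a`, since both share one native player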
By default, the position stream is updated on every new frame. You can change this behavior, e.g. to update at a certain interval, with the `TimerPositionUpdater`, or implement your own `PositionUpdater`:
player.positionUpdater = TimerPositionUpdater(
interval: const Duration(milliseconds: 100),
getPosition: player.getCurrentPosition,
);