made using Leaflet

Torque Planning

Torque Tracker

Announcement Blog

Code

@increasing.bsky.social/torque-tracker
old school music tracker
https://tangled.sh/did:plc:54jgbo4psy24qu2bk4njtpc4/torque-tracker
torque-tracker on crates.io: old school music tracker, reimplementation of Schism Tracker
https://crates.io/crates/torque-tracker
luca3s/torque-tracker on GitHub: old school music tracker
https://github.com/luca3s/torque-tracker

UI

accessibility

File loading & saving

similar to the draw API

probably need to make this sans-io so it works both on sync desktop and on async embedded, which doesn't even have the Read trait. shit
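One way the sans-io shape could look, as a rough sketch (all names hypothetical): the loader is a plain state machine that gets fed bytes and returns events, so the caller decides whether the bytes come from a blocking `File` read or an async flash read.

```rust
// Hypothetical sans-io loader sketch: no Read trait anywhere, the caller
// pushes bytes in and pulls parse events out.
pub enum LoadEvent {
    NeedMoreBytes(usize),    // parser wants at least this many more bytes
    Header { channels: u8 }, // a parsed piece of the file (made-up field)
    Done,
}

pub struct Loader {
    buf: Vec<u8>,
    header_done: bool,
}

impl Loader {
    pub fn new() -> Self {
        Loader { buf: Vec::new(), header_done: false }
    }

    // Same call whether the bytes came from std::fs on desktop or an
    // async driver on embedded.
    pub fn push(&mut self, bytes: &[u8]) {
        self.buf.extend_from_slice(bytes);
    }

    pub fn next_event(&mut self) -> LoadEvent {
        if !self.header_done {
            if self.buf.len() < 2 {
                return LoadEvent::NeedMoreBytes(2 - self.buf.len());
            }
            self.header_done = true;
            return LoadEvent::Header { channels: self.buf[1] };
        }
        LoadEvent::Done
    }
}
```

Saving works the same way in reverse: the state machine emits byte chunks and the platform layer writes them however it likes.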

completely static indices

write help page (with de

Audio Settings: a lot of cpal integration, so not fun (new cpal version soon)

what to do about header elements?

i don't want to do this for audio files. maybe just limit to WAV

what to do about playback position

Audio processing

maybe update only on pattern change

better audio interpolation (exponential, even higher order)

make this configurable in the UI
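A sketch of what "higher order" could mean here: 4-point cubic Hermite (Catmull-Rom) next to plain linear interpolation. This is a generic reference, not the tracker's actual code; `frac` is the fractional sample position between `y1` and `y2`.

```rust
// Linear interpolation between two neighbouring samples.
pub fn lerp(y1: f32, y2: f32, frac: f32) -> f32 {
    y1 + (y2 - y1) * frac
}

// 4-point cubic Hermite (Catmull-Rom): higher order than linear,
// still cheap enough for per-frame use. Interpolates between y1 and y2,
// using y0 and y3 as context.
pub fn hermite(y0: f32, y1: f32, y2: f32, y3: f32, frac: f32) -> f32 {
    let c0 = y1;
    let c1 = 0.5 * (y2 - y0);
    let c2 = y0 - 2.5 * y1 + 2.0 * y2 - 0.5 * y3;
    let c3 = 0.5 * (y3 - y0) + 1.5 * (y1 - y2);
    ((c3 * frac + c2) * frac + c1) * frac + c0
}
```

Making it UI-configurable then just means switching which function the resampler calls.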

maybe add some draw invalidation

figure out which panning algorithm to use
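Two common candidates to compare, sketched generically (not a decision): linear pan is what the simple math gives you, constant-power keeps perceived loudness steady across the sweep.

```rust
// pan is -1.0 (hard left) .. 1.0 (hard right); returns (left, right) gains.
pub fn linear_pan(pan: f32) -> (f32, f32) {
    ((1.0 - pan) * 0.5, (1.0 + pan) * 0.5)
}

// Constant-power (sin/cos) law: left^2 + right^2 == 1 everywhere,
// so the center doesn't sound quieter than the edges.
pub fn constant_power_pan(pan: f32) -> (f32, f32) {
    let theta = (pan + 1.0) * core::f32::consts::FRAC_PI_4;
    (theta.cos(), theta.sin())
}
```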

Draw invalidation

needs to be done per page, otherwise i need interior mutability

also render that somehow

push the audio data out of the audio thread

pattern page less than two invalidations or movement

header only drawing or even only this

more detailed playback status

at least between page and header. this doesn't need cooperation from the pages/header

F5 page i guess
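The per-page variant could be as small as a dirty flag the page owns itself, which is why no interior mutability is needed. A minimal sketch (hypothetical names):

```rust
// Each page owns its own dirty flag; invalidation is a plain &mut call.
struct Page {
    dirty: bool,
}

impl Page {
    fn invalidate(&mut self) {
        self.dirty = true;
    }

    // Returns whether a redraw happened; clears the flag in the same step.
    fn draw_if_dirty(&mut self) -> bool {
        let redraw = core::mem::take(&mut self.dirty);
        // if redraw { actually render this page here }
        redraw
    }
}
```

The header would get its own flag the same way, so page and header invalidate independently without cooperating.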

run buffer fill if the output has enough room for a tick, otherwise iterate. basically decide each tick what to do.

investigate how many frames a tick has and how big the out buffer is
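For the investigation: in IT/Schism-style timing a tick lasts 2.5 / BPM seconds, so at 48 kHz and the default 125 BPM a tick is 960 frames. A sketch of the per-callback decision (names made up):

```rust
// frames per tick = sample_rate * 2.5 / bpm, written in integer form.
pub fn frames_per_tick(sample_rate: u32, bpm: u32) -> u32 {
    sample_rate * 5 / (bpm * 2)
}

pub enum TickAction {
    RenderTick, // output has room for a whole tick: render it
    DrainOnly,  // not enough room: keep draining already-rendered frames
}

pub fn decide(free_frames: u32, sample_rate: u32, bpm: u32) -> TickAction {
    if free_frames >= frames_per_tick(sample_rate, bpm) {
        TickAction::RenderTick
    } else {
        TickAction::DrainOnly
    }
}
```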

Cleanup ToDos

Make Dialog an enum instead of Box<dyn>
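The enum version trades `Box<dyn Trait>`'s open set for static dispatch and no per-dialog allocation. A sketch with made-up dialog types:

```rust
// Hypothetical dialog types; the real set would come from the UI code.
struct SaveDialog { /* fields */ }
struct HelpDialog;

enum Dialog {
    Save(SaveDialog),
    Help(HelpDialog),
}

impl Dialog {
    // What was a dyn method call becomes a match; the compiler checks
    // every variant is handled.
    fn title(&self) -> &'static str {
        match self {
            Dialog::Save(_) => "Save",
            Dialog::Help(_) => "Help",
        }
    }
}
```

The enum is as big as its largest variant, which matters if one dialog carries a lot of state, but it avoids allocation entirely, which also helps the embedded port.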

maybe always tick and use an in-between buffer before writing to the output. need to always know how big that buffer has to be, maybe hard/impossible

Get rid of smol by getting embassy to work on an OS. i only need timers anyway. This will make porting to embedded a lot easier.

Can embassy run on two cores/threads?

effects

revive the list of every effect and its scope

Make it ready for embedded

group them by scope

audio backend no_std (should be easy). why the fuck is Read not in core????
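`std::io::Read` lives in `std` mostly because `io::Error` drags in OS details; a minimal core-only replacement (roughly what the embedded-io crate provides) is small. A sketch with made-up names:

```rust
// Core-only byte reader: the error type is an associated type, so no_std
// targets can use Infallible or their own HAL error.
pub trait ByteRead {
    type Error;
    fn read(&mut self, buf: &mut [u8]) -> Result<usize, Self::Error>;
}

// A slice reader works on every target, no_std included.
pub struct SliceReader<'a>(pub &'a [u8]);

impl<'a> ByteRead for SliceReader<'a> {
    type Error = core::convert::Infallible;

    fn read(&mut self, buf: &mut [u8]) -> Result<usize, Self::Error> {
        let n = self.0.len().min(buf.len());
        buf[..n].copy_from_slice(&self.0[..n]);
        self.0 = &self.0[n..];
        Ok(n)
    }
}
```

If the file loading ends up sans-io anyway, the backend may not even need this trait.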

are there good groups so that i can create effect-state enums and place them at different spots in the playback depending on the scope

create Platform abstractions for audio stream, rendering, input, files, event creation, background tasks, ...

keep the structure of the winit app trait
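A rough shape the Platform trait could take (all names hypothetical): desktop backs it with cpal/winit, embedded with HAL drivers, and the winit-app-trait structure stays on top of it.

```rust
// Hypothetical platform abstraction; methods are illustrative, not final.
pub trait Platform {
    type AudioStream;
    type Error;

    fn start_audio(&mut self, sample_rate: u32) -> Result<Self::AudioStream, Self::Error>;
    // Monotonic time for timers (embassy on desktop, a hardware timer on MCU).
    fn now_us(&self) -> u64;
    // Background tasks defined statically, so embassy can own them.
    fn spawn_background(&mut self, task: fn());
}

// A do-nothing stub, just to show a platform can be trivially small.
struct StubPlatform { t: u64 }

impl Platform for StubPlatform {
    type AudioStream = ();
    type Error = ();

    fn start_audio(&mut self, _sample_rate: u32) -> Result<(), ()> { Ok(()) }
    fn now_us(&self) -> u64 { self.t }
    fn spawn_background(&mut self, _task: fn()) {}
}
```

Whether each of these is a generic local to the app or a globally reachable handle is exactly the split noted below about audio streams vs event creation.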

figure out what information each scope needs, how often it gets called, what it can change (return value or &mut ref)

look into externally implementable items when a trait isn't the right thing. https://github.com/rust-lang/rust/issues/125418

all part of the scope

Effect Scopes

inside audio rendering, on every frame

for example: lfo, pitch shift, vol shift, pan shift

&mut ref to the audio processing?

oneshot on audio rendering creation

example: volume, pan setting

gets to edit the newly created channel once; isn't stored afterwards

once per tick (global?)

renderer matches on them

once per tick (local?)

same as inside audio rendering, just triggered less often

need to keep track of how big i make a channel.
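The "group by scope, renderer matches on them" idea could look roughly like this; effect names are examples from the notes above, not the real list.

```rust
// One enum per scope; the renderer stores each in the spot where it fires.
enum FrameEffect { Lfo { rate: f32 }, PitchSlide(f32) } // every frame
enum ChannelInit { Volume(f32), Pan(f32) }              // oneshot at channel creation
enum TickEffect { VolumeSlide(f32) }                    // once per tick

struct Channel {
    volume: f32,
    pan: f32,
}

// Oneshot scope: edits the freshly created channel and is then dropped,
// not stored anywhere.
fn apply_init(ch: &mut Channel, fx: &ChannelInit) {
    match fx {
        ChannelInit::Volume(v) => ch.volume = *v,
        ChannelInit::Pan(p) => ch.pan = *p,
    }
}
```

Frame- and tick-scoped effects would instead live in the channel and get a `&mut` to the processing state each time they fire, per the scope questions above.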


Some stuff can be a generic that is local to the app (like audio streams), while some (like creating events) need to be available from multiple tasks

How to design the API for the background tasks? I probably need to define the tasks statically somehow, so embassy can use them.

take a look at bungee

https://github.com/bungee-audio-stretch/bungee

create input abstractions. Probably only store stuff i actually query / the microcontroller can actually provide

https://bsky.app/profile/piss.beauty/post/3lym72e2bcs2m

seems to allow realtime pitch and speed shifting

https://github.com/audacity/audacity/tree/e4bc052201eb0e6e22956cb6426e88661713c6d6/libraries/lib-time-and-pitch/StaffPad

Maybe make/keep mouse support optional. don't think the hardware would support mouse

maybe create a software keyboard input

same goal, maybe compare

take a look at a realtime allocator

https://github.com/yvt/rlsf
