# Some very simple task-based parallelism.
# Adding to the simplicity is that none of the tasks
# interact with each other in any way.

defmodule Tsk do
  @moduledoc """
  A module wrapper for the tasks. As near as I can tell, tasks are
  started by executing a function, and that function must live in a module.
  """

  @doc """
  Tail-recursively count down from `v`, incrementing the accumulator `res`
  at each step (so the result equals the starting value).

  This function behaves well until the argument goes from 1000000000 to
  2000000000. Up to that point the runtime scaled linearly; at this jump
  it is MUCH worse than linear. What happens? Is it just the handling of
  big integers?

  Serial testing with three reps:

      1000000000  elixir task.ex  9.50s user 0.16s system 96% cpu 9.968 total
      2000000000  elixir task.ex  112.62s user 0.54s system 98% cpu 1:54.35 total

  Parallel testing with unlimited concurrency (on my Mac -- 8 cores),
  running the task 10 times:

      elixir task.ex  31.06s user 0.24s system 588% cpu 5.316 total
  """
  def trsumm(0, res, sv) do
    IO.puts("#{sv} #{res}")
    res
  end

  def trsumm(v, res, sv) do
    trsumm(v - 1, res + 1, sv)
  end

  @doc """
  Test using tasks. Use Task.async_stream to start them.
  """
  def main() do
    # max_concurrency = System.schedulers_online() * 2
    max_concurrency = 4
    # IO.puts("Max con #{max_concurrency}")

    # Note that this line just sets up the stream; it does not actually
    # start any tasks -- they run lazily when the stream is consumed.
    stream =
      Task.async_stream(
        1..24,
        fn sv -> Tsk.trsumm(1_000_000_000, 0, sv) end,
        max_concurrency: max_concurrency,
        timeout: :infinity
      )

    # Run the stream and wait for completion.
    Stream.run(stream)
    IO.puts("DONE")
  end
end

Tsk.main()
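# For comparison (not the author's code): the same fan-out can be written
# with explicit Task.async/Task.await calls instead of Task.async_stream.
# A minimal sketch, kept commented out so this file's behavior is unchanged;
# the smaller workload (1_000_000) is an assumption chosen so it finishes fast:
#
#     tasks = Enum.map(1..24, fn sv ->
#       Task.async(fn -> Tsk.trsumm(1_000_000, 0, sv) end)
#     end)
#     results = Enum.map(tasks, fn t -> Task.await(t, :infinity) end)
#
# The async_stream version above is the better fit here: its
# max_concurrency option bounds how many processes run the heavy loop at
# once, whereas plain Task.async starts all 24 immediately.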