>>217179
There's an actual open-source model for LLM+video+audio+voiceover; the problem is the VRAM required, which is a minimum of 32 GB.
But the good thing is, it can run across multiple GPUs by pooling their VRAM (no SLI needed, the framework just splits the model's layers across the cards over PCIe; see the snippet at the bottom), which means two $350 RTX 4060 cards can beat one $1500 RTX 4090.
Hell, if it works with ROCm too, you could just buy two $300 RX 7600s.
That's $600-$700 minimum on GPUs for an entry-level goonmachine capable of running something close to a general-purpose AI.
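Rough sketch of how the "pooling" usually works, assuming the model ships as a Hugging Face checkpoint (the repo name below is made up, swap in the real one): accelerate's device_map="auto" just shards the layers across every GPU it can see, no SLI involved.

# sketch only: placeholder model id, needs transformers + accelerate installed
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "some-org/some-multimodal-model"  # hypothetical, not the actual repo

tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",  # splits layers across GPU 0 and GPU 1, VRAM adds up
)

print(model.hf_device_map)  # shows which layers landed on which card

The catch is that layer splitting like this adds PCIe traffic between the cards, so two cheap GPUs win on capacity, not necessarily on speed.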