imoverclocked a day ago

It’s pretty great that despite having large data centers capable of doing this kind of computation, Apple continues to make things work locally. I think there is a lot of value in being able to hold the entirety of a product in hand.

  • coliveira a day ago

    It's very convenient for Apple to do this: less expense on costly AI chips, and more excuses to ask customers to buy their latest hardware.

    • nine_k a day ago

      Users have to pay for the compute somehow. Maybe by paying for models run in datacenters, maybe by paying for hardware that's capable enough to run models locally.

      • Bootvis a day ago

        I can upgrade to a bigger LLM through an API with one click. If it runs on my device, I need to buy a new phone.

        • nine_k a day ago

          I* can run the model on my device whether or not I have an internet connection, and without permission from whoever controls the datacenter. I can run the model against highly private data while being certain that the data never leaves my device.

          It's a different set of trade-offs.

          * Theoretically; I don't own an iPhone.

      • lostlogin a day ago

        But also: if Apple's way works, it’s incredibly wasteful.

        Server side means shared resources, shared upgrades and shared costs. The privacy aspect matters, but at what cost?

        • shakna a day ago

          Server side means an excuse not to improve model handling everywhere you can, and it means increasing global power usage by a noticeable percentage, at a time when we're approaching the "point of no return" for burning out the only planet we can live on.

          The cost, so far, is greater.

          • hu3 a day ago

            > Server side means an excuse to not improve model handling everywhere you can...

            How so if efficiency is key for datacenters to be competitive? If anything it's the other way around.

            • coliveira a day ago

              The previous commenter is right in that server-side companies have little incentive to use less compute, especially when they're backed by investor money. Client-side AI will be bounded by device capabilities and by how much customers invest in new devices.

        • gessha 5 hours ago

          With the wave of enshittification surrounding everything tech or tech-adjacent, the privacy cost is pretty high.

  • v5v3 a day ago

    With no company having a clear lead in everyday AI for the non-technical mainstream user, there is only going to be a race to the bottom on subscription and API pricing.

    Local inference doesn't cost the company anything, and it raises the minimum hardware customers need to buy.

b0a04gl a day ago

Flows make sense here not just for size but because they're fully invertible and deterministic. Imagine running the same generation on three iPhones and getting the same output: Apple can ensure that the same input gives the same output across devices, chips, and runs, with no weird variance or sampling noise. That's good for caching, testing, user trust, all of that. It fits Apple's whole determinism DNA and makes generation more predictable at scale.

  • yorwba a day ago

    Normalizing flows generate samples by starting from Gaussian noise and passing it through a series of invertible transformations. Diffusion models generate samples by starting from Gaussian noise and running it through an inverse diffusion process.

    To get deterministic results, you fix the seed for your pseudorandom number generator and make sure not to execute any operations that produce different results on different hardware. There's no difference between the approaches in that respect.
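
    Concretely, the recipe looks something like this in PyTorch (a toy sketch; the commented-out model call is a placeholder, not anything from the paper):

      import torch

      torch.manual_seed(0)                          # fix the PRNG seed
      torch.use_deterministic_algorithms(True)      # refuse nondeterministic kernels

      # Both flows and diffusion start from Gaussian noise like this:
      z = torch.randn(1, 3, 64, 64)

      # sample = sample_model(z)   # stand-in for either a flow or a diffusion sampler

      # Re-running this script reproduces the same z (and hence the same sample),
      # as long as every op used is deterministic on the target hardware.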

    • GenerocUsername 6 hours ago

      Agree. I'm an image-gen layman, but when I was running Stable Diffusion in 2022 I could get the same image if I used the same seed and parameters. It seemed easy to get the same image when you have full control of the inputs. The randomness is a choice.

godelski 10 hours ago

As far as I'm aware, this is the largest Normalizing Flow that exists, and I think they undermined their work by not mentioning this...

Their ImageNet model (4_1024_8_8_0.05 [0]) is ~820M parameters, while the AFHQ one is ~472M. Prior to that there were DenseFlow [1] and MaCow [2], both under 200M parameters. For more comparison, that makes DenseFlow and MaCow smaller than iDDPM [3] (270M params) and ADM [4] (553M for 256 unconditional). And now it isn't uncommon for modern diffusion models to have several billion parameters! [5] (That paper gives some numbers on ImageNet-256, which allows a direct comparison, putting TarFlow closer to MaskDiT/2 and much smaller than SimpleDiffusion and VDM++, both of which are in the billions. But note that this is 128 vs 256!)

Essentially, the argument here is that you can scale (Composable) Normalizing Flows just as well as diffusion models. There are a lot of extra benefits you get in the latent space too, but that's a much longer discussion. Honestly, the TarFlow method is simple and there are probably a lot of improvements that can be made. But don't take that as a knock on this paper! I actually really appreciated it, and it delivers on what it set out to show. The real point is that no one had trained flows at this scale before, and that really needs to be highlighted.

The tldr: people have really just overlooked different model architectures

[0] Used a third-party reproduction, so numbers might differ, but their AFHQ-256 model matches at 472M params: https://github.com/encoreus/GS-Jacobi_for_TarFlow

[1] https://arxiv.org/abs/2106.04627

[2] https://arxiv.org/abs/1902.04208

[3] https://arxiv.org/abs/2102.09672

[4] https://arxiv.org/abs/2105.05233

[5] https://arxiv.org/abs/2401.11605

[Side note] Hey, if the TarFlow team is hiring, I'd love to work with you guys

jc4p 11 hours ago

I've been trying to keep up with this field (image generation), so here are some quick notes I took:

Claude's Summary: "Normalizing flows aren't dead, they just needed modern techniques"

My Summary: "Transformers aren't just for text"

1. SOTA model for likelihood on ImageNet 64×64, first ever sub-3.2 bits per dimension (previous was 2.99 by a hybrid diffusion model)

2. Autoregressive (transformer) approach; right now diffusion is the most popular in this space (it's much faster, but a different approach)

tl;dr of autoregressive vs diffusion (there are also other approaches):

Autoregression: step-based, generate a little, then a little more, then more

Diffusion: generate a lot of noise then try to clean it up
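
Here's a toy sketch of the two sampling loops (every function here is a placeholder, not any real model's API):

  import torch

  # Autoregressive: build the output up piece by piece.
  def predict_next(tokens: torch.Tensor) -> torch.Tensor:
      return torch.randn(1)                # placeholder for a transformer's next-token prediction

  tokens = torch.zeros(0)
  for _ in range(16):                      # generate a little, then a little more...
      tokens = torch.cat([tokens, predict_next(tokens)])

  # Diffusion: start from pure noise and clean it up over many steps.
  def denoise(x: torch.Tensor, t: int) -> torch.Tensor:
      return 0.95 * x                      # placeholder for a learned denoising step

  x = torch.randn(3, 64, 64)               # a lot of noise
  for t in reversed(range(50)):            # progressively clean it up
      x = denoise(x, t)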

The diffusion approach that is the baseline for SOTA is Flow Matching from Meta: https://arxiv.org/abs/2210.02747 -- lots of fun reading material if you throw both of these papers into an LLM and ask it to summarize the approaches!

  • godelski 10 hours ago

    You have a few minor errors and I hope I can help out.

      > Diffusion: generate a lot of noise then try to clean it up
    
    You could say this about Flows too. Their history is shared with diffusion and goes back to the Whitening Transform. Flows work through a coordinate transform, so we have an isomorphism, whereas diffusion works (to simplify) through a hierarchical mixture of Gaussians, which is a lossy process (more confusing when we get into latent diffusion models, which are the primary type used). The goal of a Normalizing Flow is to turn your sampling distribution, which you don't have an explicit representation of, into a known probability distribution (typically normal/Gaussian noise). So in effect there are a lot of similarities here. I'd highly suggest learning about Flows if you want to better understand Diffusion Models.
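
    To make the coordinate-transform view concrete, here's a toy 2-D coupling flow showing the change-of-variables likelihood computation (purely illustrative; nothing like TarFlow's actual architecture):

      import math
      import torch
      import torch.nn as nn

      # Toy affine coupling layer: an invertible coordinate transform in 2-D.
      class Coupling(nn.Module):
          def __init__(self):
              super().__init__()
              self.net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 2))

          def forward(self, x):                 # x -> z, with log|det Jacobian|
              x1, x2 = x[:, :1], x[:, 1:]
              s, t = self.net(x1).chunk(2, dim=1)
              z2 = x2 * torch.exp(s) + t
              return torch.cat([x1, z2], dim=1), s.sum(dim=1)

          def inverse(self, z):                 # exact inverse: z -> x, no information lost
              z1, z2 = z[:, :1], z[:, 1:]
              s, t = self.net(z1).chunk(2, dim=1)
              return torch.cat([z1, (z2 - t) * torch.exp(-s)], dim=1)

      flow = Coupling()
      x = torch.randn(8, 2)                     # stand-in for data
      z, logdet = flow(x)
      # Change of variables: log p(x) = log N(z; 0, I) + log|det dz/dx|
      log_px = (-0.5 * z**2 - 0.5 * math.log(2 * math.pi)).sum(dim=1) + logdet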

      > The diffusion approach that is the baseline for sota is Flow Matching from Meta
    
    To be clear, Flow Matching is a Normalizing Flow. Specifically, it is a Continuous and Conditional Normalizing Flow. If you want to get into the nitty-gritty, Ricky has a really good tutorial on the stuff [0].

    [0] https://arxiv.org/abs/2412.06264
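
    For a taste of what that objective looks like, here's a minimal toy conditional Flow Matching training step using the straight-line path; the velocity network is just a hypothetical stand-in for the real (much larger) model:

      import torch
      import torch.nn as nn

      # Hypothetical velocity network; in practice this is a large transformer or U-Net.
      v = nn.Sequential(nn.Linear(2 + 1, 64), nn.SiLU(), nn.Linear(64, 2))

      x1 = torch.randn(256, 2)                  # toy "data" batch
      x0 = torch.randn(256, 2)                  # Gaussian noise
      t = torch.rand(256, 1)                    # random time in [0, 1]
      xt = (1 - t) * x0 + t * x1                # point on the straight-line path
      target = x1 - x0                          # velocity of that path
      loss = ((v(torch.cat([xt, t], dim=1)) - target) ** 2).mean()
      loss.backward()                           # train v to match the target velocity field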

    • jc4p 9 hours ago

      thank you so much!!! i should’ve put that final sentence in my post!

      • godelski 9 hours ago

        Happy to help and if you have any questions just ask, this is my jam

MBCook a day ago

I wonder if it’s noticeably faster or slower than the common way on the same set of hardware.

  • yorwba 6 hours ago

    Figure 10 in https://arxiv.org/pdf/2506.06276 has a speed comparison. You need fairly large batch sizes for this method to come out ahead. The issue is that the architecture is very sequential, so you need to be generating several images at the same time to make good use of GPU parallelism.

lnyan a day ago

Normalizing flows might be unpopular, but they're definitely not a forgotten technique.

layer8 12 hours ago
  • tomhow 6 hours ago

    Thanks. I looked at that thread and it wasn't great, with most of the comments being meta-commentary related to the article and Apple's AI progress rather than the actual research paper.

    I've decided to keep this thread on the front page, move the on-topic comments from that other thread to this one, and leave the rest of it in the past.