
Whip (by Tensorfuse) is a social platform where anyone can create and share mini apps in seconds. We're backed by Y Combinator and are building at the intersection of AI and consumer social.
As a founding mobile engineer, you'll own the core app experience that millions of creators will use to build, share, and discover mini apps. This is not a "build screens from Figma specs" role. This is a "make the app feel magic" role.
Build and ship the Whip mobile app in React Native across iOS and Android. You'll own critical surfaces like the mini app runtime, feed, creation flow, and sharing experience. You'll work directly with the founders and the founding designer to move from idea to shipped feature in days, not sprints. You'll make architectural decisions that define how the app scales. You'll obsess over performance, animation smoothness, and interaction feel — because in a consumer app, 200ms of jank is the difference between delight and deletion.
You're a builder who ships fast and cares deeply about craft. You've built React Native apps that real people use and love. You understand that consumer mobile is a different game — every frame matters, every transition matters, every tap target matters. You're comfortable going deep into native modules when React Native isn't enough, and you have opinions about state management, navigation patterns, and how to make a WebView not feel like a WebView.
Send us a link to an app you've built — on the App Store, Play Store, or a video demo. Walk us through the hardest technical problem you solved in it. Show us your code if you can. That matters more than a resume.
The future of AI is inference
With the rise of agentic workflows and reasoning models, enterprises now need 100x more compute and 10x more throughput to run state-of-the-art AI models. Building robust, scalable inference systems has become a top priority—but it's also a major bottleneck, requiring deep expertise in low-level systems, snapshotters, Kubernetes, and more.
Tensorfuse removes this complexity by helping teams run serverless GPUs in their own AWS account. You bring your containerized workload; we handle the rest: deploying, managing, and autoscaling your GPU containers on production-grade infrastructure. Teams rely on Tensorfuse for their production inference workloads.
We’re building the runtime layer for AI-native companies. Join us.