How a Direct-to-Consumer Furniture Brand Rewrote Product Scale Rules in May 2025

How a Two-Year-Old DTC Sofa Brand Turned the No-Props Restriction into a Conversion Win

In May 2025, Oakland-based DwellFold, a two-year-old direct-to-consumer sofa company with $4.2M in trailing-twelve-month revenue, faced a new marketplace constraint: a major e-commerce partner banned any staged photos that used humans, props, or comparative objects to show scale. The ban was aimed at a cleaner aesthetic, but it wreaked havoc on product clarity. Customers complained the images left them guessing about size. Return rates spiked from 12.3% to 17.8% on furniture items. DwellFold had two weeks to comply while protecting conversion and returns.

This case study walks through what DwellFold did, why it worked, the concrete implementation steps, and the measurable results over the following six months. The constraints forced a practical innovation: show scale without props, using device sensors, computed depth, and interface design. The result did not rely on stock phrases about innovation. It relied on clear engineering, UX choices, measurement, and ruthless iteration.

Why Standard Scale Cues Failed: From Misleading Props to Higher Returns

DwellFold's products are wide, low sofas where depth and seat height matter. Before May 2025 they staged couches with coffee tables, lamps, and people in-frame. Those props communicated scale instantly. The new partner policy forbade any comparative objects. The problems that followed were specific and measurable.

- Conversion drop: product page conversion fell 18% in the two weeks after the partner rolled out the rule.
- Return spike: returns jumped from 12.3% to 17.8% within a month, costing DwellFold an incremental $142,000 in return logistics and restocking in Q2.
- Support load: customer questions about dimensions increased 65%, adding headcount pressure and a projected $35,000 in extra service costs over three months.

Standard fixes were obvious but ineffective. Adding more spec tables did not help because shoppers still trust visual cues first. Zoomed-in texture shots and isolated product photos reduced ambiguity but did not replace scale cues. DwellFold needed a method to show real-world size in images without props, across devices and on the partner site, within 60 days. The solution had to be low-friction for photographers, robust to lighting, and cheap enough to roll across 120 SKUs.

A New Visual Playbook: Using Depth Maps, AR Anchors, and Ambient Shadows

DwellFold's engineering and content teams chose a three-part approach. Each part is practical and measurable.

- Capture: augment standard studio photos with depth maps captured via smartphone LiDAR, or computed with a neural monocular depth model when hardware was unavailable.
- Render: generate a calibrated scale overlay that translates image depth into a real-world scale bar, plus a subtle floor shadow to anchor the product visually to a plane.
- Present: implement on-page interactive scale modes that let users toggle a "real size" overlay and view a measured footprint in square feet and meters.

The approach relies on camera metadata and simple geometry to compute scale. When available, LiDAR provided sub-centimeter fidelity. For other shots, DwellFold used a modern monocular depth model trained on indoor furniture datasets to produce reliable relative depth. Then they converted relative depth into absolute scale by combining a single known dimension from spec sheets - typically the seat width - with focal length and sensor size metadata to align the depth map to real units. That is the trick that removes props: you anchor at one known product dimension and compute the rest programmatically.
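The anchoring trick can be sketched with the pinhole camera model. A minimal illustration, assuming a relative depth map and a focal length already expressed in pixels; the function names and all numbers are illustrative, not DwellFold's actual code:

```python
def depth_scale_factor(anchor_width_m, anchor_width_px, anchor_rel_depth, focal_px):
    """Solve for the scalar s that maps relative depth to metres.

    Pinhole model: real_width = pixel_width * depth / focal_length_px.
    Substituting depth = s * rel_depth at the known anchor dimension
    (e.g. the seat width from the spec sheet) and solving for s.
    """
    return anchor_width_m * focal_px / (anchor_width_px * anchor_rel_depth)


def pixel_span_to_metres(span_px, rel_depth, s, focal_px):
    """Convert any pixel span at its relative depth into metres."""
    return span_px * (s * rel_depth) / focal_px


# Illustrative numbers: a 1.88 m seat width spans 940 px at relative
# depth 2.0, with a focal length of 2000 px.
s = depth_scale_factor(1.88, 940, 2.0, 2000.0)
# Any other span in the same image can now be measured, e.g. 450 px
# at the same relative depth:
span_m = pixel_span_to_metres(450, 2.0, s, 2000.0)
```

Once `s` is known from the single anchor, every pixel of the depth map carries absolute units, which is exactly what makes props unnecessary.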

Rolling Out the New Imaging Pipeline: A 60-Day Implementation Timeline

Week 1-2: Rapid Experimentation and Minimum Viable Kit

Goals: prove the technique on 15 SKUs and measure immediate impact. Actions:

- Selected 15 high-traffic SKUs representing three size families: two-seater, three-seater, sectional.
- Built a capture kit: one iPhone with LiDAR, a DSLR for the primary image, and a small calibration mat (allowed by the partner because it is not a consumer-facing prop and can be cropped out of the final frame if needed).
- Wrote a small script to extract EXIF metadata: focal length and sensor size, with GPS data stripped for privacy.
- Tested MiDaS-style monocular depth inference on DSLR images when LiDAR was not available, and compared it to LiDAR ground truth for the sample set.
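The EXIF step might look like the sketch below, assuming metadata arrives as a dict keyed by numeric EXIF tag id (the shape Pillow's `Image.getexif()` returns). The tag ids come from the EXIF specification; the function name and example values are hypothetical:

```python
# Standard EXIF tag ids (per the EXIF specification)
FOCAL_LENGTH = 0x920A  # FocalLength, in millimetres
GPS_IFD = 0x8825       # GPSInfo IFD pointer


def scrub_and_extract(exif):
    """Given an EXIF dict keyed by numeric tag id, return the focal
    length in mm and a GPS-free copy of the metadata, matching the
    privacy rule in the capture checklist."""
    focal = exif.get(FOCAL_LENGTH)
    cleaned = {tag: value for tag, value in exif.items() if tag != GPS_IFD}
    return (float(focal) if focal is not None else None), cleaned


# Example with a fabricated EXIF dict (0x010F is the Make tag)
meta = {FOCAL_LENGTH: 26.0, GPS_IFD: {1: "N", 2: (37, 48, 0)}, 0x010F: "Apple"}
focal_mm, safe_meta = scrub_and_extract(meta)
```

In production the dict would come from the image file itself, e.g. `Image.open(path).getexif()` with Pillow.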

Week 3-4: Depth-to-Scale Conversion and UI Mockups

Goals: translate depth maps to accurate real-world measurements and design the on-page overlays. Actions:

- Implemented depth normalization: align monocular depth to LiDAR using a linear scale and offset derived from the known seat width. Error goal: under 5% mean absolute error on linear dimensions.
- Built the rendering pipeline: generate a semi-transparent footprint overlay, a 1-meter scale bar, and a soft ground shadow that follows the furniture silhouette. All elements had to be subtle to meet aesthetic policies.
- Mocked the UI: a toggle labeled "Show real size" and a small info icon stating "Measured from model and product specs."
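The linear alignment step can be sketched as an ordinary least-squares fit of LiDAR depth against monocular depth. A hypothetical example with made-up sample values, not DwellFold's pipeline code:

```python
def fit_scale_offset(mono, lidar):
    """Least-squares fit of lidar ≈ a * mono + b over paired depth
    samples (e.g. taken at matching pixels on a calibration shot)."""
    n = len(mono)
    mean_m = sum(mono) / n
    mean_l = sum(lidar) / n
    a = (sum((m - mean_m) * (l - mean_l) for m, l in zip(mono, lidar))
         / sum((m - mean_m) ** 2 for m in mono))
    b = mean_l - a * mean_m
    return a, b


# Monocular depth is only relative; LiDAR gives metres. A clean linear
# relationship recovers scale a and offset b exactly:
a, b = fit_scale_offset([0.5, 1.0, 1.5, 2.0], [1.1, 2.1, 3.1, 4.1])
```

With `a` and `b` in hand, every monocular depth value maps to metres, and the single spec-sheet anchor keeps the fit honest.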

Week 5-6: A/B Testing and QA

Goals: test customer response and measure effect on conversion and returns. Actions:

- Launched an A/B test on the partner site for the 15 SKUs. Variation A had standard images without props; Variation B had the new scale overlays and interactive measurement box.
- Collected 30,000 product page sessions over two weeks. Key metrics: add-to-cart rate, checkout conversion, and clicks through to the sizing FAQ as a proxy for return intent.
- Quality control: verified depth-to-scale accuracy on all 15 SKUs with physical measurements in the warehouse to ensure no more than 4% deviation.
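One standard way to check whether a lift in add-to-cart rate is real is a two-proportion z-test. A sketch with illustrative counts (15,000 sessions per arm, roughly matching the study's 30,000-session total; the conversion numbers are made up, not the study's raw data):

```python
import math


def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Z-test for a difference in conversion rates between two variants.

    Returns the z statistic and a two-sided p-value computed from the
    normal CDF via math.erf.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value


# Illustrative: 6.0% vs 7.3% add-to-cart (~22% relative lift)
z, p = two_proportion_ztest(900, 15000, 1095, 15000)
```

At these sample sizes a lift of that magnitude is comfortably significant, which is why a two-week window was enough.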

Week 7-8: Scale and Automation

Goals: roll to remaining SKUs and integrate into production pipelines. Actions:

- Automated depth inference for legacy photos: a server pipeline that runs monocular depth on existing images, aligns it to spec-table anchors, and renders overlays. Processing cost: $0.12 per image on their cloud GPU instances.
- Created a capture checklist for product photographers: always capture one LiDAR-enabled phone pass when possible, retain EXIF metadata, and record which anchor dimension was used.
- Policy compliance: worked with the partner's content guidelines team to verify that overlays were allowed and did not constitute props.
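The case study later notes that every rendered overlay carried a metadata token linking back to its anchor dimension and inference method. A minimal sketch of such a job record; the field names and hashing choice are assumptions, not DwellFold's schema:

```python
import hashlib
import json


def build_render_job(sku, image_id, anchor_name, anchor_m, method):
    """Assemble an overlay render job with a deterministic audit token
    that ties the output back to the anchor dimension and the
    inference method ("lidar" or "monocular")."""
    payload = {
        "sku": sku,
        "image": image_id,
        "anchor": {"dimension": anchor_name, "metres": anchor_m},
        "method": method,
    }
    digest = hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode("utf-8")
    ).hexdigest()
    payload["audit_token"] = digest[:12]
    return payload


job = build_render_job("DF-3SEAT-01", "img_0042", "seat_width", 1.88, "monocular")
```

Because the token is a hash of the inputs, the same image and anchor always produce the same token, which keeps A/B analysis traceable.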

From 12.3% Returns to 8.1%: Tangible Results in Six Months

DwellFold measured impact at 30, 90, and 180 days. Results were specific, repeatable, and financially significant.

- Conversion uplift: the two-week A/B test showed a 22% relative increase in add-to-cart rate for images with scale overlays. Net checkout conversion rose 14% after rollout across the partner catalog.
- Return reduction: product returns on furniture SKUs fell from 17.8% during the immediate post-policy shock to 8.1% six months after rollout, a 54% reduction versus the high point and a 34% reduction versus the pre-policy baseline. Annualized savings in logistics and restocking were projected at $420,000.
- Support load: sizing questions decreased by 48%, saving an estimated $52,000 in customer service expenses in the first six months.
- Implementation cost: total project cost was $74,000, including one-time engineering and tooling plus $0.12 per image in processing. Payback occurred within three months based on return reductions alone.

Those numbers came from direct measurement of web analytics, return records, and support ticket volumes. The engineering team kept a tight audit trail: every rendered overlay had a metadata token linking back to the anchor dimension and the inference method used (LiDAR or monocular). That traceability made A/B analysis clean and defensible.

3 Practical Lessons from Replacing Props with Computed Scale

Lesson 1: One known dimension is enough. You do not need precise depth for every pixel. Anchor the depth map to a single verified product measurement and compute the rest. This converts relative depth to absolute scale with small error when done correctly.

Lesson 2: Make the UI explainable. Users mistrust overlays that look automated. Include a short line that explains the overlay derives from product measurements. That transparency reduced calls to customer service and bolstered trust.

Lesson 3: Test across the entire funnel. Improve product photo trust but measure its effect on returns and support. Conversion is only half the story - returns kill margins. The company avoided traps where optimized imagery increased purchases but also increased returns.

Extra operational rule: keep a fall-back. When neither LiDAR nor reliable EXIF exists, flag the product for a short re-shoot. Cheap re-shoots beat uncertain images that drive returns.

How Any Product Team Can Copy This Without a Big Budget

If you sell products and are forced to remove props, here is a practical playbook you can execute in under 90 days: the minimum viable steps, plus quick thought experiments to test assumptions before committing engineering resources.

Minimum Viable Playbook

1. Pick 10 representative SKUs and measure the baseline: conversion, return rate, and support volume.
2. Capture one LiDAR-enabled phone pass per SKU. If no LiDAR is available, capture multiple angles to enable photogrammetry, or rely on monocular depth models.
3. Extract EXIF metadata and identify one reliable anchor dimension from your spec sheet. Seat width, cushion depth, or leg-to-floor height work well.
4. Run a depth-to-scale script: infer depth, align to the anchor, render a subtle scale bar and footprint overlay, and create a toggle UI for product pages.
5. A/B test for two weeks. Track add-to-cart, checkout conversion, return initiations, and sizing support tickets.
6. Iterate based on error: if mean absolute error on dimensions exceeds 6%, adjust the capture process or schedule targeted re-shoots.
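The iterate-on-error step can be sketched as a simple gate. The 6% threshold comes from the playbook above; the function names and measurement values are illustrative:

```python
def mean_abs_error_pct(predicted, measured):
    """Mean absolute error of predicted dimensions, as a percentage
    of the physically measured ground-truth values."""
    errors = [abs(p - m) / m for p, m in zip(predicted, measured)]
    return 100.0 * sum(errors) / len(errors)


def needs_reshoot(predicted, measured, threshold_pct=6.0):
    """Flag a SKU for re-shoot when inferred dimensions drift too far
    from tape-measure ground truth."""
    return mean_abs_error_pct(predicted, measured) > threshold_pct


# Width, depth, seat height in metres: inferred vs warehouse-measured
passes = not needs_reshoot([2.02, 0.99, 0.45], [2.10, 0.95, 0.44])
```

Running this per SKU after every capture batch turns "cheap re-shoots beat uncertain images" into an automatic decision rather than a judgment call.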

Thought Experiments to Validate Your Approach

Thought experiment 1: Imagine one of your best-selling items receives 50% of clicks but has a 20% return rate due to size confusion. If a visual scale overlay reduces returns by 40% and increases conversion by 10%, calculate the net margin impact. Do the math before investing. For DwellFold that calculation justified the $74,000 spend within three months.
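That calculation can be run directly. A sketch of the contribution math; every input below (sessions, price, margin, return handling cost) is a made-up illustrative number, not DwellFold data:

```python
def net_margin_impact(sessions, conv_rate, aov, margin, return_rate,
                      return_cost, conv_lift, return_reduction):
    """Monthly contribution delta from adding a scale overlay:
    margin earned on orders minus per-return handling cost,
    compared before vs after the overlay's effects."""
    def contribution(cr, rr):
        orders = sessions * cr
        return orders * aov * margin - orders * rr * return_cost

    before = contribution(conv_rate, return_rate)
    after = contribution(conv_rate * (1 + conv_lift),
                         return_rate * (1 - return_reduction))
    return after - before


# Hypothetical best-seller: 10,000 sessions/month, 3% conversion,
# $1,200 average order value, 35% margin, 20% returns at $180
# handling each; overlay gives +10% conversion and -40% returns.
delta = net_margin_impact(10_000, 0.03, 1200, 0.35, 0.20, 180, 0.10, 0.40)
```

With these inputs the monthly contribution improves by about $16,000; scale that against your catalog before approving the engineering budget.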

Thought experiment 2: Picture a marketplace that forces plain white background and bans overlays. Could you use interactive AR instead? If your AR viewer can place the product at true scale in the shopper's room using camera-to-floor distance, it likely beats 2D overlays but requires heavier investment. Test a single AR flow on high-ticket items first.

Advanced Techniques Worth Adding Later

- Photogrammetry for hero SKUs: build a textured mesh and let users rotate the 3D model. High cost, high payoff for premium items.
- Edge inference: run monocular depth on device to remove server costs and preserve privacy.
- Dynamic scale tied to viewer distance: if the browser or app can detect device distance, overlays can adapt for better perceived size. This depends on APIs and privacy settings.

All these are optional. The core technique that moves the needle is depth anchored to a single known measurement, presented clearly.

Turning a Policy Constraint into a Competitive Advantage

DwellFold treated the partner restriction not as a loss but as a forcing function. The result was not aesthetic maximalism. It was a precise engineering and UX solution that reduced returns, increased conversion, and lowered support costs. The company now requires capture of at least one depth-enabled pass for every new SKU. They publish a short note on pages telling customers how the scale is computed, which cut support volume further.

If you sell physical goods and you cannot use props, adopt the same playbook: anchor depth to a known measurement, render subtle but informative overlays, and measure the full funnel. Do the math. Run a tight A/B test. If your numbers look anything like DwellFold's, payback will be quick: customers stop guessing about size and start buying what they can trust.