Should we open-source our internal deployment tool or keep it proprietary? 2 years of development, 40K lines of Go, used by 3 teams internally. Two competitors have similar open-source tools with growing communities.
Open-source the deployment tool under Apache 2.0 after a 90-day extraction phase that separates the core orchestration engine from internal-specific integrations behind a gRPC-based plugin interface.
Decision
Open-source under Apache 2.0, but ONLY after a 90-day extraction phase that separates the core orchestration engine (~25K lines) from internal-specific integrations (~15K lines of auth, service discovery, and secrets hooks) via a gRPC-based plugin interface modeled on HashiCorp's go-plugin library.
- Hard go/no-go gate: run dependency analysis with `go list -m all` and import graph tooling (loov/goda). If >30% of packages have circular dependencies on internal-only modules, STOP: extraction cost will exceed 6 engineer-months and the project isn't architecturally ready.
- Do NOT publish the raw internal codebase under any circumstances.
- Post-release viability threshold: 100 GitHub stars and 10 external contributors within 6 months; miss both and reduce to passive maintenance.
- Allocate 0.5 FTE for community management (issue triage, contributor onboarding, security disclosure process).
- Key failure mode: extraction balloons to 180+ days due to deep coupling, burning engineering capacity with nothing shipped. Secondary failure: fewer than 50 external contributors at 12 months signals net-negative ROI.
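The extraction hinges on a clean seam between the core engine and the internal integrations. As a minimal sketch, with hypothetical interface and type names (in the real split these interfaces would be served over gRPC via hashicorp/go-plugin rather than implemented in-process), the boundary might look like:

```go
// Hypothetical plugin boundary for the extraction phase: the core engine
// depends only on these interfaces, so it compiles with zero imports of
// internal packages. Internal auth/secrets code moves behind them.
package main

import "fmt"

// Authenticator abstracts the internal auth integration (hypothetical name).
type Authenticator interface {
	Token(service string) (string, error)
}

// SecretsHook abstracts the internal secrets integration (hypothetical name).
type SecretsHook interface {
	Resolve(key string) (string, error)
}

// Engine is the core orchestrator; it holds only interface values.
type Engine struct {
	Auth    Authenticator
	Secrets SecretsHook
}

// staticAuth is a trivial stand-in an open-source user could ship.
type staticAuth struct{ token string }

func (s staticAuth) Token(string) (string, error) { return s.token, nil }

// envSecrets resolves secrets from an in-memory map instead of the
// internal secrets service.
type envSecrets struct{ values map[string]string }

func (e envSecrets) Resolve(k string) (string, error) {
	v, ok := e.values[k]
	if !ok {
		return "", fmt.Errorf("secret %q not found", k)
	}
	return v, nil
}

func main() {
	eng := Engine{
		Auth:    staticAuth{token: "dev-token"},
		Secrets: envSecrets{values: map[string]string{"db": "s3cr3t"}},
	}
	tok, _ := eng.Auth.Token("deployer")
	fmt.Println("token:", tok)
}
```

The design point is that the internal implementations (the ~15K lines) become one plugin binary each, while the core ships only the interfaces plus open defaults like the stubs above.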
Next actions
- Run `go list -m all` and loov/goda import-graph analysis on the 40K-line codebase to produce a dependency map showing which packages have circular dependencies on internal-only modules, and calculate the percentage against the 30% go/no-go threshold.
Council notes
Selection rationale
- b003 had the highest confidence (0.88), survived three rounds of adversarial review with strengthening actions from multiple models in Rounds 1-3, and provided the most specific actionable framework, including named tools, quantified thresholds, and explicit failure modes. No surviving branch had higher confidence.
Rejected alternatives
- Business Source License (BSL) instead of Apache 2.0, with simpler modularization: b006 proposed BSL to prevent hyperscaler commoditization, which is a valid concern. However, BSL significantly dampens community adoption: contributors are wary of license restrictions, and the two competing tools already use permissive licenses. For a 40K LoC tool trying to catch up to established competitors with growing communities, BSL creates an adoption headwind you cannot afford. The simpler modularization (refactoring into libraries rather than a gRPC plugin interface) also underestimates the coupling risk that b003's extraction gate explicitly addresses.
- Dual-track licensing with an enterprise version targeting paying customers: b005's dual-track strategy (Apache 2.0 core plus commercial enterprise) is architecturally similar to b003 but adds premature monetization complexity. With only 3 internal teams as users and no existing external user base, targeting 3 paying enterprise customers within 18 months is speculative. b003's approach is more disciplined: prove community viability first, then layer monetization. b005's success metrics (500 stars, 50 contributors in 12 months) are also more aggressive than b003's without providing the extraction safeguards.
- Keep proprietary due to BSL/licensing risk concerns
Evidence boundary
Observed from your filing
- Should we open-source our internal deployment tool or keep it proprietary? 2 years of development, 40K lines of Go, used by 3 teams internally.
- Two competitors have similar open-source tools with growing communities.
Assumptions used for analysis
- The 40K-line Go codebase can be meaningfully separated into a ~25K-line core engine and a ~15K-line internal integration layer without requiring a ground-up rewrite
- Apache 2.0 licensing is appropriate given that the two competitors already use permissive open-source licenses and have growing communities
- 0.5 FTE is available and sufficient for first-year community management without starving the 3 internal teams of engineering support
- The deployment tool solves a sufficiently general problem that external contributors will emerge; it is not so niche that a community never materializes
- The 3 internal teams can tolerate a 90-day period in which the tool undergoes architectural refactoring while remaining functional
- Existing stack defaulted: greenfield assumed (not_addressed)
Inferred candidate specifics
- Chosen path: as stated under Decision above (Apache 2.0 release gated on the 90-day extraction phase and the 30% circular-dependency go/no-go check).
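The 30% go/no-go gate can be dry-run before any real `go list` or goda output is in hand. A toy sketch, assuming an illustrative import graph and an `internal/` path convention (both hypothetical; a real run would build the graph from `go list -json ./...` or goda), of how the circular-dependency percentage could be computed:

```go
// Toy go/no-go gate: given an import graph (package -> direct imports),
// flag each package that sits on a dependency cycle passing through an
// internal-only package, then compare the flagged share to the 30% gate.
package main

import (
	"fmt"
	"strings"
)

// reaches reports whether `to` is reachable from `from` by following at
// least one import edge (so a package does not trivially reach itself).
func reaches(graph map[string][]string, from, to string) bool {
	seen := map[string]bool{}
	stack := append([]string(nil), graph[from]...)
	for len(stack) > 0 {
		p := stack[len(stack)-1]
		stack = stack[:len(stack)-1]
		if p == to {
			return true
		}
		if seen[p] {
			continue
		}
		seen[p] = true
		stack = append(stack, graph[p]...)
	}
	return false
}

// circularWithInternal returns the fraction of packages involved in a
// cycle that includes at least one internal-only package.
func circularWithInternal(graph map[string][]string, isInternal func(string) bool) float64 {
	if len(graph) == 0 {
		return 0
	}
	flagged := 0
	for p := range graph {
		for q := range graph {
			if isInternal(q) && reaches(graph, p, q) && reaches(graph, q, p) {
				flagged++
				break
			}
		}
	}
	return float64(flagged) / float64(len(graph))
}

func main() {
	// Illustrative graph: engine, scheduler, and internal auth form a cycle.
	graph := map[string][]string{
		"core/engine":    {"core/scheduler", "internal/auth"},
		"core/scheduler": {"core/engine"},
		"internal/auth":  {"core/engine"},
		"core/render":    {},
	}
	isInternal := func(p string) bool { return strings.HasPrefix(p, "internal/") }
	frac := circularWithInternal(graph, isInternal)
	// With this toy graph, 3 of 4 packages are flagged (75%), tripping the gate.
	fmt.Printf("%.0f%% of packages on internal cycles\n", frac*100)
	if frac > 0.30 {
		fmt.Println("STOP: gate failed, extraction not architecturally ready")
	}
}
```

The pairwise-reachability check is quadratic in package count, which is fine at 40K lines / a few hundred packages; a production pass would instead take strongly connected components from goda's graph output.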
Inferred specifics table
| Value | Kind | Basis | Where introduced |
|---|---|---|---|
| Apache 2.0 | version | synthetic | chosen_path |
| 0.5 FTE | estimate | synthetic | chosen_path |
| 90-day extraction phase | estimate | synthetic | chosen_path |
| ~25K lines (core engine) | estimate | synthetic | chosen_path |
| ~15K lines (internal integrations) | estimate | synthetic | chosen_path |
| >30% of packages with circular dependencies | threshold | synthetic | chosen_path |
| 6 engineer-months extraction cost | estimate | synthetic | chosen_path |
| 100 GitHub stars and 10 external contributors within 6 months | threshold | synthetic | chosen_path |
| 180+ days extraction overrun | estimate | synthetic | chosen_path |
| 50 external contributors at 12 months | estimate | synthetic | chosen_path |
| 30% go/no-go threshold | threshold | synthetic | next_action |
| 0.88 | estimate | synthetic | selection_rationale |
| Rounds 1-3 | estimate | synthetic | selection_rationale |
| Apache 2.0 | version | synthetic | rejected_alternatives.path |
| Apache 2.0 | version | synthetic | rejected_alternatives.rationale |
| 3 paying customers within 18 months | estimate | synthetic | rejected_alternatives.rationale |
| 500 stars | estimate | synthetic | rejected_alternatives.rationale |
| 50 contributors in 12 months | estimate | synthetic | rejected_alternatives.rationale |
| HashiCorp's 2023 BSL pivot | estimate | synthetic | rejected_alternatives.rationale |
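The 6-month viability threshold above uses "miss both" semantics: passive maintenance triggers only when neither mark is reached. A small sketch of that reading (function name hypothetical):

```go
// One reading of the 6-month viability gate: drop to passive maintenance
// only when BOTH the 100-star and 10-contributor marks are missed.
package main

import "fmt"

func sixMonthVerdict(stars, contributors int) string {
	missedStars := stars < 100
	missedContribs := contributors < 10
	if missedStars && missedContribs {
		return "passive maintenance"
	}
	return "continue active investment"
}

func main() {
	fmt.Println(sixMonthVerdict(140, 6)) // one mark hit: keep investing
	fmt.Println(sixMonthVerdict(40, 3))  // both missed: wind down
}
```

Making this rule explicit matters because the stricter reading (miss either mark) would wind the project down even with 140 stars and 6 contributors.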
Unknowns blocking a firmer verdict
- Apache 2.0 vs. BSL licensing tradeoff: b004 and b006 raised valid concerns about hyperscaler commoditization risk that b003 does not address. If the tool gains significant traction, the Apache 2.0 choice is effectively irreversible; reversing it later would mean a HashiCorp-style re-licensing controversy.
- The 30% circular-dependency threshold and the 6 engineer-month cost ceiling are synthetic estimates; no named benchmark or study supports these specific numbers, and actual extraction complexity could differ significantly.
- Whether 0.5 FTE is sufficient for community management is untested; comparable projects (e.g., early-stage CNCF tools) often require more investment to reach critical mass.
- Competitor response is unmodeled: two competitors with growing communities may accelerate feature development once they see a new entrant, potentially negating the community catch-up strategy.