I'm creating an app in RealityKit that generates a shape based on user input. For example, if the user enters a radius of 0.1 meters, the shape (sphere in my case) will have a radius of 0.1 meters, same logic for 0.2, 0.3, etc. The code all works, but I want to make it so the sphere appears when the user taps the screen.
Here is my code for the page that takes in user input:
class UserInput: ObservableObject {
    @Published var score: Float = 0.0
}

struct PreviewView: View {
    @ObservedObject var input = UserInput()

    var body: some View {
        NavigationView {
            ZStack {
                Color.black
                VStack {
                    Text("Change the radius of your AR sphere")
                        .foregroundColor(.white)
                    Text("\(String(format: "%.1f", self.input.score)) meters")
                        .foregroundColor(.white)
                        .bold()
                        .font(.title)
                        .padding(10)
                    Button(action: { self.input.score += 0.1 }) {
                        Text("Increment by 0.1 meters")
                    }
                    .padding(10)
                    Button(action: { self.input.score -= 0.1 }) {
                        Text("Decrease by 0.1 meters")
                    }
                    .padding(10)
                    NavigationLink(destination: Reality(input: self.input)) {
                        Text("View in AR")
                            .bold()
                            .padding(.top, 30)
                    }
                }
            }
            .ignoresSafeArea()
        }
    }
}
Here is the code for the Reality ARView:
struct Reality: UIViewRepresentable {
    @ObservedObject var input: UserInput

    func makeUIView(context: Context) -> ARView {
        let arView = ARView(frame: .zero)
        let model = ModelEntity(mesh: .generateSphere(radius: input.score))
        let anchor = AnchorEntity(plane: .horizontal)
        anchor.addChild(model)
        arView.scene.anchors.append(anchor)
        return arView
    }

    func updateUIView(_ uiView: ARView, context: Context) {}
}
There are tons of examples of generating shapes when the user taps the screen, so that's not my issue; what makes this difficult is that I'm taking in user input.
Here is some code that does what I want, but without user input. It has some physics built in that I plan to implement once I get the user input working.
struct ARViewContainer: UIViewRepresentable {
    func makeUIView(context: Context) -> ARView {
        let arView = ARView(frame: .zero)
        let planeAnchorEntity = AnchorEntity(plane: .horizontal)
        let plane = ModelEntity(mesh: MeshResource.generatePlane(width: 1, depth: 1),
                                materials: [SimpleMaterial(color: .white, isMetallic: true)])
        plane.physicsBody = PhysicsBodyComponent(massProperties: .init(mass: 1),
                                                 material: .generate(friction: 1, restitution: 1),
                                                 mode: .kinematic)
        plane.generateCollisionShapes(recursive: true)
        planeAnchorEntity.addChild(plane)
        arView.scene.anchors.append(planeAnchorEntity)
        arView.installGestures([.scale, .rotation], for: plane)
        arView.addGestureRecognizer(UITapGestureRecognizer(target: context.coordinator,
                                                           action: #selector(Coordinator.handleTap)))
        context.coordinator.view = arView
        return arView
    }

    func makeCoordinator() -> Coordinator {
        Coordinator()
    }

    func updateUIView(_ uiView: ARView, context: Context) {}
}
Here is the Coordinator class that generates the box that I want to be adjustable in size:
class Coordinator {
    weak var view: ARView?

    @objc func handleTap(_ recognizer: UITapGestureRecognizer) {
        guard let view = view else { return }
        let location = recognizer.location(in: view)
        let results = view.raycast(from: location, allowing: .estimatedPlane, alignment: .horizontal)
        if let result = results.first {
            let anchorEntity = AnchorEntity(raycastResult: result)
            let box = ModelEntity(mesh: MeshResource.generateBox(size: 0.3),
                                  materials: [SimpleMaterial(color: .black, isMetallic: true)])
            // .static means the body cannot be moved
            // .dynamic means the physics simulation can move it
            // .kinematic means it is moved by code or user gestures, not by physics
            box.physicsBody = PhysicsBodyComponent(massProperties: .init(mass: 0.5),
                                                   material: .generate(),
                                                   mode: .dynamic)
            box.generateCollisionShapes(recursive: true)
            box.position = simd_make_float3(0, 0.7, 0)
            anchorEntity.addChild(box)
            view.scene.anchors.append(anchorEntity)
        }
    }
}
I tried fusing these two projects into what I want, but I get all sorts of errors I have no idea how to fix, and when I try something new, a bunch of other errors appear. I think it boils down to the @ObservedObject and the fact that I have multiple classes/structs compared to my project with user input. The user input will go into the Coordinator class, but ultimately it is the ARViewContainer that actually renders the view.
If anyone can help me out, I would be incredibly grateful.
To increase or decrease the sphere's radius and then position the sphere with a tap, use the following code.
Now, with a working button and coordinator, it will be much easier to implement raycasting.
import SwiftUI
import RealityKit

struct ContentView: View {
    var body: some View {
        PrevView().ignoresSafeArea()
    }
}
Reality view:
struct Reality: UIViewRepresentable {
    @Binding var input: Float
    let arView = ARView(frame: .zero)

    class ARCoordinator: NSObject {
        var manager: Reality

        init(_ manager: Reality) {
            self.manager = manager
            super.init()
            let recognizer = UITapGestureRecognizer(target: self,
                                                    action: #selector(tapped))
            manager.arView.addGestureRecognizer(recognizer)
        }

        @objc func tapped(_ recognizer: UITapGestureRecognizer) {
            if manager.arView.scene.anchors.isEmpty {
                let model = ModelEntity(mesh: .generateSphere(radius: manager.input))
                let anchor = AnchorEntity(world: [0, 0, -2])
                // later use AnchorEntity(world: result.worldTransform)
                anchor.addChild(model)
                manager.arView.scene.anchors.append(anchor)
            }
        }
    }

    func makeCoordinator() -> ARCoordinator { ARCoordinator(self) }
    func makeUIView(context: Context) -> ARView { return arView }
    func updateUIView(_ uiView: ARView, context: Context) {}
}
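As the `// later use AnchorEntity(world: result.worldTransform)` comment hints, the fixed world anchor can be replaced with a raycast from the tap location. Here is an untested sketch of what `tapped(_:)` could look like with that change; it keeps the same `ARCoordinator` and assumes the session is tracking a horizontal plane (the white material is an arbitrary choice):

```swift
@objc func tapped(_ recognizer: UITapGestureRecognizer) {
    let location = recognizer.location(in: manager.arView)
    // Raycast from the tap point onto an estimated horizontal plane
    let results = manager.arView.raycast(from: location,
                                         allowing: .estimatedPlane,
                                         alignment: .horizontal)
    guard let result = results.first else { return }
    let model = ModelEntity(mesh: .generateSphere(radius: manager.input),
                            materials: [SimpleMaterial(color: .white, isMetallic: false)])
    // Anchor the sphere at the raycast hit instead of a fixed world position
    let anchor = AnchorEntity(world: result.worldTransform)
    anchor.addChild(model)
    manager.arView.scene.anchors.append(anchor)
}
```

Note that this version drops the `anchors.isEmpty` check, so every successful tap places a new sphere with the current radius; keep the check if you only want one.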
PrevView view:
struct PrevView: View {
    @State private var input: Float = 0.0

    var body: some View {
        NavigationView {
            ZStack {
                Color.black
                VStack {
                    Text("\(String(format: "%.1f", input)) meters")
                        .foregroundColor(.yellow)
                    HStack {
                        Spacer()
                        Button(action: { input -= 0.1 }) {
                            Text("Decrease").foregroundColor(.red)
                        }
                        Spacer()
                        Button(action: { input += 0.1 }) {
                            Text("Increase").foregroundColor(.red)
                        }
                        Spacer()
                    }
                    NavigationLink(destination: Reality(input: $input)
                        .ignoresSafeArea()) {
                        Text("View in AR")
                    }
                }
            }
            .ignoresSafeArea()
        }
    }
}