winrt::hstring is convertible to std::basic_string_view, which comes in handy quite often. However, I am unable to do the same for IVectorView.
Looking at the interface of IVector, I imagine you would have to convert it back to the underlying implementation type, so I tried
using impl_type = winrt::impl::vector_impl<float, std::vector<float>, winrt::impl::single_threaded_collection_base>;
winrt::Windows::Foundation::Collections::IVectorView<float> vector_view = GetIVectorView();
auto& impl = *winrt::get_self<impl_type>(vector_view);
auto& container = impl.get_container();
which compiles, but container.size() is 0, which is incorrect.
Edit: vector_view was the result of the TensorFloat.GetAsVectorView method, so I can solve my problem by using the TensorFloat.CreateReference method to get an IMemoryBufferReference instead of an IVectorView.
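For reference, a sketch of that workaround, assuming the tensor comes from Windows.AI.MachineLearning and using the documented IMemoryBufferByteAccess interop interface to reach the raw bytes (the helper name AsSpan and the keepAlive out-parameter are my own; Windows-only, requires the Windows SDK):

```cpp
#include <span>
#include <winrt/Windows.AI.MachineLearning.h>
#include <MemoryBuffer.h>  // ::Windows::Foundation::IMemoryBufferByteAccess

// Obtain a read-only span over the tensor's data via CreateReference.
// The IMemoryBufferReference must outlive the span, so the caller keeps it.
std::span<const float> AsSpan(
    winrt::Windows::AI::MachineLearning::TensorFloat const& tensor,
    winrt::Windows::Foundation::IMemoryBufferReference& keepAlive)
{
    keepAlive = tensor.CreateReference();

    // IMemoryBufferByteAccess is the COM interop interface that exposes
    // the raw buffer behind an IMemoryBufferReference.
    auto byteAccess =
        keepAlive.as<::Windows::Foundation::IMemoryBufferByteAccess>();
    uint8_t* data{};
    uint32_t capacity{};
    winrt::check_hresult(byteAccess->GetBuffer(&data, &capacity));

    return {reinterpret_cast<const float*>(data), capacity / sizeof(float)};
}
```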
However, I'd still like to know whether an IVectorView can be converted to a std::span, and if not, why this is not allowed.
The IVector and IVectorView interfaces are specifically designed not to expose the underlying contiguous memory, probably to support cases where there is no underlying contiguous memory, or where the implementation language doesn't expose it as such (JavaScript?).
You could probably get back the implementation type when you know that C++/WinRT provides the implementation; however, in my case there is no way of knowing the implementation type. In any case, it's inadvisable to do this.
In my case it would have been better if TensorFloat.GetAsVectorView did not exist, so that I would have found TensorFloat.CreateReference instead.
Also, it would be nice if C++/WinRT made its collections range-v3 compatible. But until then, the most advisable thing to do is simply to copy into a std::vector.