
We were previously assuming that the input offsets and lengths were raw byte offsets into a UTF-8 string. While our String representation is internally UTF-8, from the outside world it is seen as UTF-16: offsets are passed in as UTF-16 code units, and lengths are returned in the same units. Before this change, the test included in this commit would crash Ladybird (and otherwise return wrong values). The implementation here is quite inefficient; I am sure there is a much smarter way to write it that would avoid converting from UTF-8 to a UTF-16 string (and then back again). Fixes: #20971
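To illustrate the distinction the commit fixes, here is a minimal sketch (not Ladybird's actual code) of why the two unit systems disagree: `🙃` (U+1F643) is 4 bytes in UTF-8 but a surrogate pair, i.e. 2 code units, in UTF-16, which is what DOM `CharacterData` offsets and lengths count. The helper names below are hypothetical; the naive round-trip through UTF-16 mirrors the convert-and-convert-back approach described above.

```python
def utf16_length(s: str) -> int:
    """Length of s in UTF-16 code units, as DOM APIs report it."""
    return len(s.encode("utf-16-le")) // 2

def utf16_substring(s: str, offset: int, count: int) -> str:
    """substringData-style slice by UTF-16 code unit offset/count,
    done naively by round-tripping through a UTF-16 encoding."""
    units = s.encode("utf-16-le")
    return units[2 * offset : 2 * (offset + count)].decode("utf-16-le")

# '🙃' is 1 code point, 4 UTF-8 bytes, but 2 UTF-16 code units:
assert utf16_length("🙃") == 2
assert utf16_length("🙃hi🙃🙃") == 8   # 2 + 1 + 1 + 2 + 2
assert utf16_substring("🙃hi🙃🙃", 2, 2) == "hi"
```

Indexing by UTF-8 bytes instead (e.g. treating offset 2 as a byte offset) would land in the middle of the emoji's byte sequence, which is why the old behavior crashed or returned garbage.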
6 lines
194 B
Text
text.data = '🙃', length = 2
text.data = '🙃🙃', length = 4
text.data = '🙃hi🙃🙃', length = 8
text.data = '🙃i🙃🙃', length = 7
text.data = '🙃replaced!', length = 11
repla