I have a list with the following elements:
("(aviyon" "213" "flyingman" "no))")
I want to split the strings in this list using parentheses as delimiters, but I also want to keep those parentheses in the new list, without breaking the order.
My desired output (a new list, or the same list modified) is:
("(" "aviyon" "213" "flyingman" "no" ")" ")")
I am coming from imperative languages, and this would be a 15-minute job in Java or C++, but here I'm stuck on what to do. I know I have to:
1- Get an element from the list in a loop
I think this is done with (nth 1 '(listname))
2- Separate it without removing the delimiter, and put the pieces into a new list
I found functions such as SPLIT-SEQUENCE, but I can't do this without removing the delimiter and without breaking the original order.
Any help would be appreciated.
Let's have another answer, without external libraries. Like you already did, we can split the problem into smaller parts:
- get all the tokens from a single string: all-tokens
- apply this function on all strings in your input list, and concatenate the results
The first part, taking a state and building a list from it, looks like an unfold operation (anamorphism). Fold (catamorphism), called reduce in Lisp, builds a value from a list of values and a function (and optionally an initial value). The dual operation, unfold, takes a value (the state) and a function, and generates a list of values. In the case of unfold, the step function accepts a state and returns a new state along with the resulting list.
Here, let's define a state as 3 values: a string, a starting position in the string, and a stack of tokens parsed so far. Our step function next-token returns the next state.
The main function, which gets all the tokens from a string, just computes a fixpoint:
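A minimal sketch of that function could look like this; it keeps applying the step function next-token (defined below) until the position part of the state becomes NIL, and the exact loop shown here is just one way to phrase it:

    (defun all-tokens (string)
      ;; Iterate NEXT-TOKEN over the state (STRING START TOKENS),
      ;; starting at position 0 with no tokens, until START becomes NIL.
      (let ((start 0)
            (tokens '()))
        (loop while start
              do (multiple-value-setq (string start tokens)
                   (next-token string start tokens)))
        ;; Tokens were pushed onto a stack, so restore the original order.
        (nreverse tokens)))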
We need an auxiliary function:
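For example, a helper that finds where the token starting at a given position ends; the name token-end and the exact delimiter test are choices made for this sketch:

    (defun token-end (string start)
      ;; Return the position just after the token starting at START.
      ;; A parenthesis is a single-character token; any other token runs
      ;; up to the next parenthesis or to the end of STRING.
      (if (member (char string start) '(#\( #\)))
          (1+ start)
          (or (position-if (lambda (c) (member c '(#\( #\))))
                           string :start start)
              (length string))))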
The step function is defined as follows:
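One possible definition, using the state described above (string, start position, stack of tokens) and the token-end helper; treat it as a sketch rather than the only way to write it:

    (defun next-token (string start tokens)
      ;; Consume one token of STRING at START and push it onto TOKENS.
      ;; Return the new state as three values (STRING NEW-START TOKENS);
      ;; NEW-START is NIL once the whole string has been consumed.
      (if (or (null start) (>= start (length string)))
          (values string nil tokens)
          (let ((end (token-end string start)))
            (values string
                    (when (< end (length string)) end)
                    (cons (subseq string start end) tokens)))))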
You can try with a single string:
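With the sketch above, a REPL session could look like this (the exact values depend on how next-token is written):

    CL-USER> (next-token "(aviyon" 0 nil)
    "(aviyon"
    1
    ("(")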
If you take the resulting state values and reuse them, you have:
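Still with the sketch above:

    CL-USER> (next-token "(aviyon" 1 '("("))
    "(aviyon"
    NIL
    ("aviyon" "(")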
And here, the second return value is NIL, which ends the generation process. Finally, you can do:
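For example (mapcan is my choice here; it concatenates the per-string token lists, which is safe because all-tokens returns fresh lists):

    (mapcan #'all-tokens '("(aviyon" "213" "flyingman" "no))"))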
Which gives:
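    ("(" "aviyon" "213" "flyingman" "no" ")" ")")

That is the output you asked for, assuming the sketches above.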
The above code is not fully generic, in the sense that all-tokens knows too much about next-token: you could rewrite it to take any kind of state. You could also handle sequences of strings using the same mechanism, by keeping more information in your state variable. Also, in a real lexer you would not want to reverse the whole list of tokens; you would use a queue to feed a parser.
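To illustrate the genericity point, a driver could take the step function and the initial state as arguments; the name unfold and the convention that the second state element signals termination are assumptions of this sketch:

    (defun unfold (step &rest state)
      ;; Repeatedly apply STEP to the state until its second element
      ;; becomes NIL, then return the final state as a list.
      (loop while (second state)
            do (setf state (multiple-value-list (apply step state)))
            finally (return state)))

    ;; Example use with the next-token sketch above:
    ;; (nreverse (third (unfold #'next-token "(aviyon" 0 nil)))
    ;; => ("(" "aviyon")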