Parsing a Custom EBNF Grammar with Scala's Token-Based Parsers

Preface

I recently worked on a project migrating from Oracle to the Spark platform, and ran into the need to translate platform formulas into SparkSQL (on Hive); Spark itself is developed in its mother tongue, Scala. So in what follows I describe the platform formulas with an EBNF grammar and parse them using Scala's token-based parsers.

The Platform Formula and Its SparkSQL Translation

A platform formula looks like this:

if (XX1_m001[D003]="邢おb7骯α䵵薇" || XX1_m001[H003]<"2") && XX1_m001[D005]!="wed" then XX1_m001[H022,COUNT]

The field value "邢おb7骯α䵵薇" is there purely to test that all kinds of character sets match correctly.
The corresponding SparkSQL then looks like the statement below; since we are using Hive on Spark, it reads much like an Oracle SQL statement:

SELECT COUNT(H022) FROM XX1_m001 WHERE (XX1_m001.D003='邢おb7骯α䵵薇' OR  XX1_m001.H003<'2')  AND  XX1_m001.D005!='wed'

All in all it is fairly simple, because I only want to build a demo here.

EBNF Grammar for the Platform Formula and Lexical Design

expr-condition ::= tableName "[" valueName "]" comparator Condition
expr-front ::= expr-condition (("&&" | "||") expr-condition)*
expr-back ::= tableName "[" valueName "," operator "]"
expr ::= "if" expr-front "then" expr-back

The lexical definitions are as follows:

operator => ["SUM","COUNT"]
tableName, valueName => ident  # ident is an identifier token
comparator => ["=",">=","<=",">","<","!="]
Condition => stringLit  # stringLit is a string-literal token

Parsing the EBNF Grammar with Scala's Token-Based Parsers

Scala's token-based parsers need to extend the StandardTokenParsers class, which provides convenient parsing functions and token sets.
We can use the lexical.delimiters list to hold the delimiters the grammar translator will encounter during parsing, and the lexical.reserved list to hold its keywords.
Looking at the platform formula, "=",">=","<=",">","<","!=","&&","||","[","]",",","(",")" are all delimiters. We could also treat "=",">=","<=",">","<","!=","&&","||" as keywords, but my habit is to reserve keyword status for words made of letters. So the keyword set here is "if","then","SUM","COUNT".
In code this looks like:

lexical.delimiters += ("=",">=","<=",">","<","!=","&&","||","[","]",",","(",")")
lexical.reserved   += ("if","then","SUM","COUNT")

Isn't that easy?
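As a minimal sketch of what these two lists buy us (the object and rule names below are my own, not from the original code): once `[` and `]` are registered as delimiters, the built-in `ident` parser picks up the table and field names directly:

```scala
import scala.util.parsing.combinator.syntactical.StandardTokenParsers

object FieldRefDemo extends StandardTokenParsers {
  lexical.delimiters += ("[", "]")

  // Recognize `table[field]` and rewrite it as `table.field`.
  def fieldRef: Parser[String] = ident ~ ("[" ~> ident <~ "]") ^^ {
    case table ~ field => table + "." + field
  }

  def main(args: Array[String]): Unit =
    println(phrase(fieldRef)(new lexical.Scanner("XX1_m001[D003]")))
}
```

Running it prints a `Success` result carrying the rewritten string `XX1_m001.D003` (this sketch assumes the scala-parser-combinators module is on the classpath).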
Now let's see how to use the token-based parser to parse the EBNF grammar we designed earlier. Here is the code first:

class ExprParsre extends StandardTokenParsers {
  lexical.delimiters += ("=", ">=", "<=", ">", "<", "!=", "&&", "||", "[", "]", ",", "(", ")")
  lexical.reserved   += ("if", "then", "SUM", "COUNT")

  // expr ::= "if" expr-front "then" expr-back
  def expr: Parser[String] = "if" ~ expr_front ~ "then" ~ expr_back ^^ {
    case "if" ~ exp1 ~ "then" ~ exp2 => exp2 + " WHERE " + exp1
  }

  // Each side's parenthesis is optional, so a group like (A || B)
  // can open at one condition and close at a later one.
  def expr_priority: Parser[String] = opt("(") ~ expr_condition ~ opt(")") ^^ {
    case Some("(") ~ conditions ~ Some(")") => "(" + conditions + ")"
    case Some("(") ~ conditions ~ None      => "(" + conditions
    case None ~ conditions ~ Some(")")      => conditions + ")"
    case None ~ conditions ~ None           => conditions
  }

  // tableName "[" valueName "]" comparator stringLit  ->  table.field<cmp>'value'
  def expr_condition: Parser[String] =
    ident ~ "[" ~ ident ~ "]" ~ ("=" | ">=" | "<=" | ">" | "<" | "!=") ~ stringLit ^^ {
      case table ~ "[" ~ field ~ "]" ~ cmp ~ value =>
        table + "." + field + cmp + "'" + value + "'"
    }

  def comparator: Parser[String] = ("&&" | "||") ^^ {
    case "&&" => " AND "
    case "||" => " OR "
  }

  def expr_front: Parser[String] = expr_priority ~ rep(comparator ~ expr_priority) ^^ {
    case exp1 ~ exps => exp1 + exps.map(x => x._1 + " " + x._2).mkString(" ")
  }

  def expr_back: Parser[String] = ident ~ "[" ~ ident ~ "," ~ ("SUM" | "COUNT") ~ "]" ^^ {
    case table ~ "[" ~ field ~ "," ~ func ~ "]" =>
      "SELECT " + func + "(" + field + ") FROM " + table
  }

  // Run a parser against the whole input string.
  def parserAll[T](p: Parser[T], input: String) =
    phrase(p)(new lexical.Scanner(input))
}
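To close the loop, here is a condensed, self-contained sketch of the same grammar plus a driver, so the snippet compiles on its own (it assumes the scala-parser-combinators module is on the classpath; the object name, the camelCase rule names, and the ASCII test value "abc" substituted for the Unicode field value are all mine):

```scala
import scala.util.parsing.combinator.syntactical.StandardTokenParsers

// Condensed copy of the parser above, with single-space joins in the WHERE clause.
object FormulaDemo extends StandardTokenParsers {
  lexical.delimiters += ("=", ">=", "<=", ">", "<", "!=", "&&", "||", "[", "]", ",", "(", ")")
  lexical.reserved   += ("if", "then", "SUM", "COUNT")

  def expr: Parser[String] = "if" ~> exprFront ~ ("then" ~> exprBack) ^^ {
    case where ~ select => select + " WHERE " + where
  }
  def exprFront: Parser[String] = exprPriority ~ rep(comparator ~ exprPriority) ^^ {
    case head ~ tail => head + tail.map(t => t._1 + t._2).mkString
  }
  // Optional parentheses on either side, as in expr_priority above.
  def exprPriority: Parser[String] = opt("(") ~ exprCondition ~ opt(")") ^^ {
    case open ~ cond ~ close => open.getOrElse("") + cond + close.getOrElse("")
  }
  def exprCondition: Parser[String] =
    ident ~ ("[" ~> ident <~ "]") ~ ("=" | ">=" | "<=" | ">" | "<" | "!=") ~ stringLit ^^ {
      case table ~ field ~ cmp ~ value => table + "." + field + cmp + "'" + value + "'"
    }
  def comparator: Parser[String] = ("&&" | "||") ^^ { case "&&" => " AND "; case "||" => " OR " }
  def exprBack: Parser[String] =
    ident ~ ("[" ~> ident) ~ ("," ~> ("SUM" | "COUNT") <~ "]") ^^ {
      case table ~ field ~ func => "SELECT " + func + "(" + field + ") FROM " + table
    }

  def main(args: Array[String]): Unit = {
    val input =
      """if (XX1_m001[D003]="abc" || XX1_m001[H003]<"2") && XX1_m001[D005]!="wed" then XX1_m001[H022,COUNT]"""
    // Prints the translated SparkSQL on success, or an error with position info.
    println(phrase(expr)(new lexical.Scanner(input)))
  }
}
```

On the sample input this yields `SELECT COUNT(H022) FROM XX1_m001 WHERE (XX1_m001.D003='abc' OR XX1_m001.H003<'2') AND XX1_m001.D005!='wed'`.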