Stanford NLP Group @stanfordnlp Stanford, CA, USA

Computational Linguistics—Natural Language—Machine Learning—Deep Learning—Technology from Silicon Valley. (@chrmanning, @jurafsky, @percyliang & @ChrisGPotts)

99 Following   86,151 Followers   6,178 Tweets

Joined Twitter 2/28/10


It’ll be great having @tallinzen visiting Stanford Linguistics next Tuesday to talk about What Inductive Biases Ena… https://t.co/7CnbWI7Vb8
1/18/2020
The Diversity-Innovation Paradox in Science: Why does greater diversity in research teams increase innovation but n… https://t.co/cj6b3MsnWi

A belated blog post for our BERTology EMNLP paper (by Olga Kovaleva, Alexey Romanov, yours truly and @arumshisky).… https://t.co/hL4ylYOqZg
Retweeted by Stanford NLP Group
1/17/2020
@RsCircus @jurafsky Yes, it was a Coursera course! Our @nlp_class was one of the first 6 courses launched after the… https://t.co/0dVTtYSDkL

Hey #NLP / Grounded Language folks -- VLN is now in Habitat. Instruction following ("Go outside the room, stop at… https://t.co/Uegwsypf2U
Retweeted by Stanford NLP Group

It has part of speech tagging with #nltk + stanford pos tagger: https://t.co/027cDvFabC
Retweeted by Stanford NLP Group

What Does #BERT Look At? An Analysis of BERT’s Attention #deeplearning https://t.co/W6Kzn2iSde https://t.co/QfZob48w5R
Retweeted by Stanford NLP Group

Analyzing Polarization in Social Media: Method and Application to Tweets on 21 Mass Shootings #deeplearning https://t.co/bDjoe2VYEM
Retweeted by Stanford NLP Group

Automatically Neutralizing Subjective Bias in Text #deeplearning https://t.co/bo73uvZB37 https://t.co/uNTdP92GX2
Retweeted by Stanford NLP Group

Great advice from @Thom_Wolf on why you should release your code and make it as easy to use! It usually takes me >… https://t.co/LAVhX3LXjw
Retweeted by Stanford NLP Group

Professor Ellie Pavlick's work with language acquisition and information retrieval has recently won the largest ($6… https://t.co/P08Cxnb7bC
Retweeted by Stanford NLP Group

Wow @huggingface tokenizers 👏👌 How often do you get to swap out 3 lines of code and become 5x more productive?
Retweeted by Stanford NLP Group
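
The retweet above doesn't show the three lines in question. A minimal sketch of the kind of swap it is praising, using the Hugging Face tokenizers library (the vocab filename below is an assumption, not from the tweet):

```python
# Hedged sketch: replace a pure-Python BERT tokenizer with the Rust-backed
# `tokenizers` library. The vocab path is an assumption -- point it at any
# BERT wordpiece vocab file you have locally.
from tokenizers import BertWordPieceTokenizer

tokenizer = BertWordPieceTokenizer("bert-base-uncased-vocab.txt", lowercase=True)

encoding = tokenizer.encode("Ultra-fast tokenization for state-of-the-art NLP")
print(encoding.tokens)  # wordpieces, e.g. ['[CLS]', 'ultra', '-', 'fast', ...]
print(encoding.ids)     # integer ids ready to feed into a model
```
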
1/15/2020
one project to applaud here is the Universal Dependencies, where there really is a lot of thought and attention and… https://t.co/FKN42ZYzPV
Retweeted by Stanford NLP Group

Prof. Percy Liang from Stanford speaking at Pinterest Labs about Learning from Language. Live stream:… https://t.co/KCuGzlmg5d
Retweeted by Stanford NLP Group

🔥 Introducing Tokenizers: ultra-fast, extensible tokenization for state-of-the-art NLP 🔥 ➡️ https://t.co/BiOfFHfUdL https://t.co/M8eT59A3gg
Retweeted by Stanford NLP Group

.@emnlp2020 CFP seems to be up, deadline May 8: https://t.co/jXTAvb5ZAc
Retweeted by Stanford NLP Group
1/14/2020
The funny thing about AIDungeon is that, due to all the GPT-2 pre-training on swaths of random web text, you can create a… https://t.co/3liARVs6La
Retweeted by Stanford NLP Group

Interested in joining our research community? We're now accepting applications for fixed-term assistant professor (… https://t.co/DHtdrn3eoZ
Retweeted by Stanford NLP Group

A really nice read on "Finding Syntax with Structural Probes" by John Hewitt. https://t.co/mkI2nMS577
Retweeted by Stanford NLP Group

Here's a thread surveying some 'classic' work on #compositionality. Lots of people seem to be discussing this right… https://t.co/G3gJhzjKiv
Retweeted by Stanford NLP Group

Postdoc Fellowships at the new Stanford Data Science Institute! 2-3 year positions, deadline soon, apply by Jan 21! https://t.co/W3oGIWkPV5
Retweeted by Stanford NLP Group
1/12/2020
The best explanation of Attention I have ever seen: https://t.co/GVkwhPCGuX thanks to @Stanford and… https://t.co/P1Yv8IKsDm
Retweeted by Stanford NLP Group

https://t.co/hKrTbY8eLE Stanford is uploading its content for CS221 (Artificial Intelligence), taught by Prof… https://t.co/GAZ7wyIgWj
Retweeted by Stanford NLP Group

Finding Syntax with Structural Probes https://t.co/kZBotXsfS0 #artificialintelligence, #datascience https://t.co/W4oTylvfMj
Retweeted by Stanford NLP Group

After reading @chrmanning et al.’s paper on what BERT looks at, https://t.co/xPcAjIBlVU, this makes intuitive sense… https://t.co/m8tL7Q3W0G
Retweeted by Stanford NLP Group

Very happy to share our latest work accepted at #ICLR2020: we prove that a Self-Attention layer can express any CNN… https://t.co/biGlHtDUJe
Retweeted by Stanford NLP Group
1/11/2020
Now that neural nets have fast implementations, a bottleneck in pipelines is tokenization: strings➡️model inputs.… https://t.co/BSnaKctzpz
Retweeted by Stanford NLP Group

🔥“we train a model on one GPU for 4 days that outperforms GPT (trained using 30x more compute) on the GLUE natural… https://t.co/SI5hbPmNsp
Retweeted by Stanford NLP Group

We (Ananth, @AlonTalmor, @sameer_, @nlpmattg) are pleased to announce the release of ORB, an Open Reading Ben… https://t.co/VWVl798dFy
Retweeted by Stanford NLP Group
1/10/2020
I'll go even a step further and suggest you guys start where Stanford's CS231n (CNNs for Visual Recognition), CS224… https://t.co/7bNiPK4F1L
Retweeted by Stanford NLP Group

#SCiL2020 / #LSA2020 @ChrisGPotts' brief origin story via literature review for Rational Speech Acts (#RSA) (for… https://t.co/yIzFGhrXGh
Retweeted by Stanford NLP Group

.@stanfordnlp grads at work: @roger_p_levy, now at @mitbrainandcog. https://t.co/2MQ1fgncNw

tf.keras in @TensorFlow 2.1 adds TextVectorization layer to flexibly map raw strings to tokens/word pieces/ngrams… https://t.co/MeNIXR16LS

Flying home from #LSA2020? Remember to put your liquids in a separate bag! https://t.co/C2yjEpTkvu
Retweeted by Stanford NLP Group

Great introduction to AI. Lecture 1: Overview Stanford CS221: AI (Autumn 2019) With Percy Liang… https://t.co/naPfAaW7KX
Retweeted by Stanford NLP Group

Finding Syntax with Structural Probes #AI #deeplearning https://t.co/cwsKh8qh74 https://t.co/cxQ7QKPUB2
Retweeted by Stanford NLP Group

I wrote about performing sentiment analysis on tweets using the @stanfordnlp #Java API. The small app is built using… https://t.co/XSWKDPtHFl
Retweeted by Stanford NLP Group

Reminded today that it's been 2 years since we started the distinguished speaker series and fireside chats at MSR A… https://t.co/5nI4fu0WAv
Retweeted by Stanford NLP Group
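
The sentiment-analysis post retweeted above uses the Stanford CoreNLP Java API. For Python readers, a rough equivalent via the official stanza client might look like the sketch below; it assumes a local CoreNLP download with CORENLP_HOME set, and is not the app from the tweet:

```python
# Hedged sketch: score tweet sentiment with CoreNLP's sentiment annotator via
# the official Python client. Assumes CoreNLP is downloaded locally and
# CORENLP_HOME points at it; the example tweets are made up.
from stanza.server import CoreNLPClient

tweets = ["I love the new tokenizers release!", "This parser is so slow."]

with CoreNLPClient(annotators=["tokenize", "ssplit", "parse", "sentiment"],
                   timeout=30000, memory="4G") as client:
    for tweet in tweets:
        ann = client.annotate(tweet)
        for sentence in ann.sentence:
            # sentiment is a string label such as "Positive" or "Negative"
            print(sentence.sentiment, "|", tweet)
```
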
1/9/2020
.@stanfordnlp #ICLR2020 papers #3—How to use distributionally robust optimization to avoid poor results on “atypica… https://t.co/WrVOy720FC

The Language of Food by Dan Jurafsky. https://t.co/LJyiVbX3QM
Retweeted by Stanford NLP Group

If you are a young PI in a learning/compneuro related area, I recommend checking out the @CIFAR_News Scholars progr… https://t.co/IStFtHu1nl
Retweeted by Stanford NLP Group
1/8/2020
.@stanfordnlp people’s #ICLR2020 papers #2—ELECTRA: @clark_kev and colleagues (incl. at @GoogleAI) show how to buil… https://t.co/7YkYOQR1fc

@Pinterest Labs presents a distinguished lecture by @Stanford professor Percy Liang on Learning from Language on 1/… https://t.co/NkjxWiWS9z
Retweeted by Stanford NLP Group
1/7/2020
.@stanfordnlp people’s #ICLR2020 papers #1—@ukhndlwl and colleagues (incl. at @facebookai) show the power of neural… https://t.co/IWNDsUzPvI

@olasemm12 @saurabh3981 You can enroll here: https://t.co/xz2u4Ny7Dw – and all the lectures remain free on YouTube… https://t.co/Y5jqeSxdyU

10 ML & NLP Research Highlights of 2019. New blog post on ten ML and NLP research directions that I found exciting… https://t.co/54rytXTYSa
Retweeted by Stanford NLP Group

Pretty generous with use of materials, etc. by non-Stanford students, too... https://t.co/5UqRU2TuYD
Retweeted by Stanford NLP Group
1/6/2020
Here’s the latest data on Stanford NLP course enrollment, @NathanBenaich—we now teach 10x the students per year as… https://t.co/z2tTwxgheX

@joespeez Thanks a lot!!! Will discuss and see if anything sounds good

@deliprao To get a feel for how neural modeling is moving linguistic theory at a macro-level, I like this video fro… https://t.co/MO07FfnXjc
Retweeted by Stanford NLP Group

Very excited to be teaching CS224N alongside my advisor @chrmanning, and with a wonderful cohort of TAs! https://t.co/ajkVNqbRLA
Retweeted by Stanford NLP Group

Jurafsky and Martin have written an in-depth book on NLP and computational linguistics. This one is from the master… https://t.co/Jjo9qOw7Yb
Retweeted by Stanford NLP Group

@NathanBenaich @suzatweet Think so, will try to get the data

The rift in publication practices between life science and computer science. For CS people 👩‍💻, read free on arXiv.… https://t.co/lkKX0Heg26
Retweeted by Stanford NLP Group

@raya_pawar This version is for Stanford students and SCPD people (already full); next available thing is XCS224N i… https://t.co/xdZqeUcaNC

Excellent course for those interested in NLP. From one-hot to transformer-based architectures and challenging task… https://t.co/poBRlpwTHI
Retweeted by Stanford NLP Group
1/5/2020
@saurabh3981 CS224N is the live Stanford class (also available to students in industry via SCPD), which gives Stanf… https://t.co/klxAjqBT1G

Stanford CS224N: Natural Language Processing with Deep Learning is back for 2020, starting Jan 7, with over 500 stu… https://t.co/msg5xATZmI
1/4/2020
Brains are amazing. Our lab demonstrates that single human layer 2/3 neurons can compute the XOR operation. Never s… https://t.co/REenIg24be
Retweeted by Stanford NLP Group

Extremely interesting lecture on #GroundedNLU #cs224u L10 [https://t.co/6YRj8sIIW4] by @ChrisGPotts of @stanfordnlp https://t.co/ByXFqsTLFT
Retweeted by Stanford NLP Group

Net of the Week: 100-dimensional word vectors trained on Wikipedia & Gigaword 5 data https://t.co/Jy8YRdSDMC. Thank… https://t.co/Ai25wsEmoc
Retweeted by Stanford NLP Group

The 2010s were an eventful decade for NLP! Here are ten shocking developments since 2010, and 13 papers* illustrat… https://t.co/YHGLkjG2NT
Retweeted by Stanford NLP Group
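
The "100-dimensional word vectors trained on Wikipedia & Gigaword 5" retweeted above are presumably Stanford's GloVe 6B vectors. A minimal sketch for trying them out, using gensim's downloader (a convenience assumed here, not something the tweet specifies):

```python
# Hedged sketch: load 100-d GloVe vectors (Wikipedia 2014 + Gigaword 5) via
# gensim's downloader and poke at them. Triggers a ~130 MB download on first run.
import gensim.downloader as api

wv = api.load("glove-wiki-gigaword-100")  # returns a KeyedVectors instance

print(wv["language"][:5])                  # first 5 of 100 dimensions
print(wv.most_similar("linguistics", topn=3))
print(wv.similarity("stanford", "university"))
```
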
1/3/2020
Today's fun: Using reactive #SpringBoot, #ReactJs and the Stanford CoreNLP library to apply sentiment analysis on tweet… https://t.co/fK1dNyThwX
Retweeted by Stanford NLP Group
1/1/2020
OMG. A Stanford professor enters the pic & it gets MORE confusing! "There is variation in whether the term macaron… https://t.co/b5q0kzjU80
Retweeted by Stanford NLP Group

I really appreciate @andreas_madsen's honesty in this post - both about the struggles of working as an independent… https://t.co/gwFP2fw2nj
Retweeted by Stanford NLP Group
12/31/2019
@RandomlyWalking The set of things I find interesting in the space are structured priors which encode relationships… https://t.co/xWmub1TWvw
Retweeted by Stanford NLP Group

A new paper has been making the rounds with the intriguing claim that YouTube has a *de-radicalizing* influence.… https://t.co/k7Fjtaoz4o
Retweeted by Stanford NLP Group
12/29/2019
Yoshua Bengio's response to @GaryMarcus about their debate. https://t.co/Xg9WqLFdex
Retweeted by Stanford NLP Group
12/27/2019
@CBowdon Try this paper from @robinomial, which is maybe the first published clear adversarial attack in #NLProc https://t.co/yiYQE4zy40
12/26/2019
We spend our time finetuning models on tasks like text classif, NER or question answering. Yet 🤗Transformers had n… https://t.co/4pLF5Vc05v
Retweeted by Stanford NLP Group

The most interesting detail was that _some_ of the Chinese systems did as well on Asian faces as on white faces. Good… https://t.co/siMD0uacbk
Retweeted by Stanford NLP Group

For fellow geeks out there, we plan to use Solr for indexing, Stanford's NLP toolkit for tagging and relation extra… https://t.co/N4fRpzI22M
Retweeted by Stanford NLP Group

“Today, ‘hers’ is not recognized as a pronoun by the most widely used technologies for Natural Language Processing… https://t.co/IF8bUliYa6
Retweeted by Stanford NLP Group

@uralik1 presenting his research on multi-turn beam search to @chrmanning at #NeurIPS workshop on ConvAI https://t.co/FzsZMOdLLc
Retweeted by Stanford NLP Group

@yoavgo @ethayarajh @qi2peng2 @chrmanning To add to this, also in relation to @iatitov's point of GCNs being useful in… https://t.co/jTZZDlRltz
Retweeted by Stanford NLP Group

Graph neural networks are a super cool new paradigm in deep learning, and they have a lot of potential in solving b… https://t.co/8sWwaAjAgV
Retweeted by Stanford NLP Group

Facebook AI is open-sourcing XLM-R, a multilingual model that uses self-supervised training to achieve state-of-the… https://t.co/xbtqfkdQoa
Retweeted by Stanford NLP Group
12/20/2019
@yoavgo This paper by @yuhaozhangx @qi2peng2 and @chrmanning finds that you can use GCNs to improve relation extrac… https://t.co/znKQnxd2QY
Retweeted by Stanford NLP Group

The state of NLP in 2019. I’m talking with an amazing undergrad who has already published multiple papers on BERT-… https://t.co/xty5dvzlXD
Retweeted by Stanford NLP Group
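
For context on the GCN relation-extraction tweets above: the shared building block is the graph-convolution propagation rule of Kipf & Welling. A generic sketch of that rule (an illustration only, not the model from the paper):

```python
# Hedged sketch: one GCN layer, H' = ReLU(D^-1/2 (A+I) D^-1/2 H W).
# Generic illustration of the propagation rule, not the paper's model.
import numpy as np

def gcn_layer(A: np.ndarray, H: np.ndarray, W: np.ndarray) -> np.ndarray:
    A_hat = A + np.eye(A.shape[0])                 # adjacency with self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))  # D^-1/2 as a vector
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(0.0, A_norm @ H @ W)         # ReLU(A_norm H W)

# Toy example: 3 tokens in a dependency graph, 4-d features, 2-d output.
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
H = np.random.randn(3, 4)
W = np.random.randn(4, 2)
print(gcn_layer(A, H, W).shape)  # (3, 2)
```
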
12/19/2019
Christopher Manning @StanfordAILab is the rockstar when it comes to actually moving #AI forward instead of just… https://t.co/VAxxdnAE1F
Retweeted by Stanford NLP Group

🔥🔥Series A!!🔥🔥 Extremely excited to share the news with you and so in awe of what we have built with the community… https://t.co/ayrAVnRRms
Retweeted by Stanford NLP Group
12/18/2019
@DevonYoo The StackOverflow answer talking about those files is not the one talking about the Stanford Segmenter

@gneubig we didn't do much with pragmatic inference at learning time--https://t.co/RTPQMzZ8qn (by @sidawxyz) and se… https://t.co/w7reasQoFP
Retweeted by Stanford NLP Group

@DevonYoo Those files aren’t in our system – they’re from a different answer – ours is a machine learning system

I went to NeurIPS for the first time in 2015. I believe it was in Montreal. I wrote about my terrible experience an… https://t.co/XgdOqNCo9M
Retweeted by Stanford NLP Group
12/17/2019
I know it’s #NeurIPS but it still needs to be said: if you’re working with generated templates, you have not made a… https://t.co/TgYhv8o72J
Retweeted by Stanford NLP Group
12/15/2019
Can we simulate both a user👩‍🦰 and system 🤖 and learn to autocomplete in an unsupervised way? Yes! We frame the a… https://t.co/8n8egY6QE7
Retweeted by Stanford NLP Group

Check out the latest post on our blog! Does adding a theorem to any paper increase its chance of acceptance?… https://t.co/IZPsRf6hGO
Retweeted by Stanford NLP Group
12/14/2019
Percy Liang sharing his thoughts on his earlier work on Bayesian nonparametrics https://t.co/j7YPNxaXKc
Retweeted by Stanford NLP Group

Percy Liang talks at the Retrospectives Workshop #NeurIPS2019 https://t.co/e9pUHlEHzK
Retweeted by Stanford NLP Group

I'm presenting work on regularizing visual representations with language at the #NeurIPS2019 ViGIL workshop today (… https://t.co/poIoiJOEUH
Retweeted by Stanford NLP Group

“NMNs often exhibit poor generalization even when the ground-truth programs are provided. This result is remarkable… https://t.co/NQuSdJnZQk

@julianharris @DeepMindAI @GoogleAI @jasonbaldridge @FelixHill84 Well worth reading, but we can’t take credit for it! More @StanfordPsych.

Extending Machine Language Models toward Human-Level Understanding: "Integrated Understanding System"…"Language doe… https://t.co/HF01awU41A
Retweeted by Stanford NLP Group

Selected features are more predictive of Y and less confound-related than regression, log-odds, etc. Work by Reid P… https://t.co/lrVnYQri94

What ngrams most predict outcome Y controlling for confounds C? Blog post & python package. #CausalInference for te… https://t.co/pyDbTDWGkw

Number of AI papers on arXiv 2010-2019 by sub-category. Interesting: ML steepest rise; Computation and Language ste… https://t.co/3FjtWSLCXX
Retweeted by Stanford NLP Group

@indexingai In #NLP one popular benchmark is the Stanford Question Answering Dataset (SQuAD) challenge. This is a readi… https://t.co/I8WXZKisgm
Retweeted by Stanford NLP Group

Our MixTape is 3.5-10.5x faster than Mixture of Softmaxes w/ SOTA results in language modeling & translation. Key i… https://t.co/x9j9UsyF5X
Retweeted by Stanford NLP Group

I'm proud to announce that in a few hours, the 2019 AI Index Report will be released. It provides unbiased, comprehen… https://t.co/M8Ofz1tyy4
Retweeted by Stanford NLP Group
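
For anyone who wants to poke at the SQuAD benchmark mentioned a few tweets up, a minimal sketch using the Hugging Face datasets package (an assumed convenience; the tweet doesn't prescribe any particular loader):

```python
# Hedged sketch: pull the SQuAD reading-comprehension benchmark locally and
# inspect one example. Uses the `datasets` package purely for convenience.
from datasets import load_dataset

squad = load_dataset("squad")          # v1.1: train + validation splits
example = squad["train"][0]

print(example["question"])
print(example["context"][:200])        # passage the answer must come from
print(example["answers"])              # gold answer text + character offsets
```
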
12/13/2019
